Threat Trends | 17 mins
Hackers’ Top 5 Cybersecurity Predictions for 2024
Hacking Manager at Hadrian
As we stand at the beginning of 2024, it's an opportune moment to assess the current state of the cybersecurity landscape and gear up for the upcoming year. The past year posed unique challenges with the evolving array of targets, exploits, and tools available to threat actors.
In 2023, supply chain attacks gained prominence as a rapidly growing vector for cyber threats, and both businesses and criminals embraced Large Language Models (LLMs). The surge in geopolitical tensions led to an increase in zero-day attacks, while the adoption of multi-cloud strategies hindered organizations' visibility into their environments.
For CISOs and security teams, navigating this dynamic landscape is paramount. Our forecasts, drawing insights from various analysts and Hadrian's experts, aim to offer actionable perspectives. By anticipating emerging trends and threats, organizations can proactively fortify their cybersecurity measures to tackle the challenges ahead.
Proliferation of LLM Code Injection Attacks
Generative AI is poised to revolutionize business’ products and services by significantly improving both customer experience and agent productivity. The technology’s ability to automate interactions with customers using natural language has already garnered attention, with McKinsey estimating a potential value of $404 billion for customer operations sources.
However, the implementation of data subjects' rights with Large Language Models presents notable challenges. The European Data Protection Supervisor states that “rectifying, deleting or even requesting access to personal data learned by LLMs, whether it is accurate or made up of “hallucinations”, may be difficult or impossible.”
Himanshu Patri, Hacker at Hadrian
Despite this challenge, 85% of data scientists and engineers say they have or plan to deploy LLM applications within the next 12 months.
- 1% - deployed more than 2 LLMs into production
- 13% - deployed 1-2 LLMs into production
- 44% - deployed LLMs for experimentation only
- 27% - plan to start using LLMs in the next 12 months
- 15% - have no plans to use LLMs
Security risks associated with Large Language Models are brought to the forefront through the concept of prompt injection attacks, as outlined in the OWASP Top 10 for Large Language Model Applications. The group defines prompt injection as “This manipulates a large language model (LLM) through crafty inputs, causing unintended actions by the LLM. Direct injections overwrite system prompts, while indirect ones manipulate inputs from external sources.”
This manipulation technique can cause unintended actions by LLMs, with NCC Group providing further insights into how prompt injection attacks work and why Large Language Models are vulnerable to the technique. Among the unintended actions that LLMs can make, they can reveal proprietary information about how the model was trained and customer’s personal information.
Himanshu Patri, Hacker at Hadrian
Microsoft's "New Bing" search engine represents a real-world case study and illustrates the vulnerabilities of these AI systems. Just one day after the service’s launch a prompt injection attack by a Stanford University student named Kevin Liu unveiled the initial prompt governing end user interactions.
Looking ahead, the future of AI/LLM technology holds both promise and challenges. As these technologies continue to advance, security becomes an increasingly complex problem. Simon Willison's exploration of GPT-4-V highlights potential new vectors for prompt injection attacks. In his analysis, Simon demonstrated how an injection attack could be carried out using images uploaded to GTP-4-V, emphasizing the need for proactive security measures.
In response to these challenges, basic recommendations include implementing input allow-listing, input deny-listing, controlling input length, validating outputs, and incorporating robust monitoring and audit practices. However, prompt-based models will always be vulnerable to such attacks and companies should consider alternatives, such as fine-tuned learning models, as a potential solution to mitigate security risks associated with prompt-based models.
Mandatory Continuous Cloud Monitoring
Cloud services have gained widespread adoption due to a range of benefits, including low-cost data storage, rapid deployment of computing infrastructure for app development, and user access to valuable SaaS applications. A Google survey revealed that 41.4% of respondents planned to increase investment in cloud-based services and products, showcasing the growing significance of these solutions.
Furthermore, the survey revealed that selecting a trustworthy cloud provider was an important selection criterion. Enterprise cloud decision-makers prioritize capabilities such as "strong capabilities for protecting and controlling my data in the cloud" (40%) and compatibility with existing security solutions (38%).
Despite this prioritization, 39% of businesses reported experiencing a data breach in their cloud environment in the previous year, according to Thales research. This becomes more alarming as 75% of businesses reveal that over 40% of their cloud-stored data is sensitive, marking a significant increase from the previous year.
The lag in security practices maturity at defending the cloud is notable with 43% of organizations either in the early stages or had not yet initiated implementing practices to secure their cloud environments according to IBM’s Cost of a Data Breach report.
- 34% were at midstage.
- 23% were at the mature stage.
- 26% were in early-stage.
- 17% had no cloud security.
Arpit Borawake, Hacker at Hadrian
To address these challenges, organizations should take proactive measures alongside vigilant monitoring. This includes implementing robust access controls, encrypting data, conducting regular security assessments, and ensuring compliance with regulations and standards.
Arpit Borawake, Hacker at Hadrian
Considering the dynamic nature of cloud infrastructure, where developers can rapidly spin up virtual servers, maintaining an up-to-date map of cloud assets becomes challenging. Adopting a continuous approach to cloud monitoring becomes crucial, providing a timely perspective of the cloud attack surface and aiding in the identification of a broader range of potential security vulnerabilities.
Spearphishing's Acceleration in the AI Era
People represent the most vulnerable aspect of an organization in the face of phishing, scams, and fraud, a susceptibility further heightened by the introduction of generative AI chatbots. These advancements have increased the efficacy of hackers in executing spear-phishing and Business Email Compromise (BEC) attacks, leading to cybersecurity breaches, including financially motivated ransomware and data theft.
According to SlashNext, since the launch of ChatGPT at the end of 2022, there has been a staggering 1,265% increase in malicious phishing emails. Notably, 68% of all phishing emails employed text-based BEC tactics, solidifying concerns over the role of chatbots and jailbreaks in the exponential growth of phishing. Credential phishing has experienced a remarkable 967% increase, primarily driven by ransomware groups seeking access to companies in exchange for financial gain.
Melvin Lammerts, Hacking Manager at Hadrian
One emerging malicious generative AI is WormGPT, a model based on the open-source ChatGPT equivalent, GPT-J. While the exact training data remains undisclosed, it is rumored to involve malware-related content. This tool, available at a subscription price of €60 per month, has been discussed in hacker forums and was tested by a journalist, revealing its capability to craft well-written and personalized phishing emails.
Beyond WormGPT, FraudGPT is a malicious AI tailored for cyberattacks such as spear phishing and malware creation. While other cybercriminals use specialized generative AI, others exploit curated prompts to jailbreak ChatGPT in order to use it for malicious purposes. An online community named "ChatGPTJailbreak" on Reddit, boasting over 15k subscribers, discusses methods to circumvent the boundaries set by OpenAI.
The use of AI tooling in phishing attacks is expected to persist and multiply. The defense side typically lags behind attackers, emphasizing the need for reliable software utilizing AI to detect phishing attacks across various mediums. However, the development of such software is gradual and not without challenges such as cost, privacy concerns, and potential false positives.
Melvin Lammerts, Hacking Manager at Hadrian
To mitigate the threat, organizations should enforce least privilege access to make it harder for hackers to target specific individuals and their devices. Hadrian also recommends implementing routine and mandatory training on identifying phishing attacks, facilitating reporting measures, periodically sending fake phishing emails for employee awareness, and utilizing software that analyzes emails for potential threats.
Mass Attacks Targeting Network Zero-Days
Google Project Zero, an initiative comprising security analysts employed by Google to identify zero-day vulnerabilities, distinguishes zero-day vulnerabilities from “zero-day exploits in the wild,” which are vulnerabilities already exploited in cyber-attacks. According to Project Zero’s research there have been 56 zero-days exploited in the wild in 2023, the second highest ever recorded.
This is an increase from 2022 when Google Project Zero noted a 40% drop in detected and disclosed zero-days compared to 2021. This was attributed to a combination of security improvements and regressions. Furthermore, they found that over 40% of the discovered zero days were variants of previously reported vulnerabilities
Notable instances of zero-days exploited in the wild in 2023 include:
- A critical vulnerability in Cisco IOS XE software, was exploited by an unknown actor to backdoor vulnerable networks. Over 10,000 switches, routers, and other devices are known to have been compromised by the zero-day.
- Citrix Systems addressed three critical security flaws in Citrix Application Delivery Controller (ADC) and Citrix Gateway. These vulnerabilities enabled remote code execution and are known to have been actively exploited by threat actors.
- Fortinet issued updates for FortiGate products to address a critical vulnerability allowing remote code execution in its SSL VPN appliances. Shodan scans revealed that approximately 250,000 accessible Fortinet firewalls globally, making them susceptible to attack
Melvin Lammerts, Hacking Manager at Hadrian
During a Google Cloud event, it was claimed that Chinese hackers have been the top state-sponsored threat actors in zero-day usage over the past three years, being responsible for most of the exploited zero-days in 2023.
Melvin Lammerts, Hacking Manager at Hadrian
These zero-days often involve regression, bypasses, or loose connections to previous Common Vulnerabilities and Exposures (CVEs), allowing organizations to proactively defend themselves to some extent. Recommendations for mitigation include prompt patching and logging, monitoring CVE feeds for relevant technologies, and products used by organizations, and employing proper firewall usage to minimize exposure.
Accelerating Momentum of Supply Chain Attacks
Modern applications used by businesses, governments, and individuals heavily rely on the software supply chain. This digitization of the supply chain, however, exposes these areas to heightened vulnerabilities from cyberattacks. Notably, the integration of digital elements into both product delivery and traditional supply chains necessitates a broadened assessment of security considerations.
Himanshu Patri, Hacker at Hadrian
Common types of software supply chain attacks can vary:
- Exploiting Vulnerabilities: Threat actors may use known or zero-day vulnerabilities to gain unauthorized access to crucial developer resources, accounts, or software within the software supply chain.
- Targeting Open Source Components: Cyberattacks focus on open-source software, including libraries and dependencies in global projects. Tampering with these projects can lead to the widespread distribution of malicious code.
- Typosquatting: Threat actors upload subtly misspelled malicious packages to code repositories, impersonating popular packages and tricking developers into downloading malware.
- Stolen Credentials: Attackers use stolen usernames and passwords to access developer environments or software supply chain resources, allowing covert data theft or manipulation of code.
- Compromising CI/CD Pipelines: Continuous integration (CI) and continuous delivery (CD) pipelines, crucial for DevOps and software supply chains, can be breached by attackers, potentially compromising the entire software supply chain.
Himanshu Patri, Hacker at Hadrian
A pivotal tool in enhancing software supply chain security is the Software Bill of Materials (SBOM), which serves as an inventory of all components and software dependencies linked to a specific application. Covering both proprietary and open-source components, SBOMs provide comprehensive transparency, including the origins and licensing details of components.
The transparency provided by SBOMs empowers organizations to comply with legal requirements and proactively address vulnerabilities. By breaking down applications into components, SBOMs facilitate systematic vulnerability checks, enabling companies to implement safeguards before potential cyberattacks.
The increasing recognition of SBOMs as a valuable tool is evident, in a survey 42% of respondents were already utilizing them, and an additional 31% were planning adoption in the near future, signaling impressive growth in their use.
Given the far-reaching impact of supply chain attacks, the most effective approach is to incorporate security measures during vendor selection. Furthermore, securing the software supply chain is a crucial responsibility for developers, and testing should be carried out pre- and post-release of code to production.
Newsletter sign up
Never miss a beat
Subscribe to our newsletter for the latest security insights and tips.