How Generative AI is Shaping Cybercrime
Generative AI is not only reshaping the way we work; it is changing the entire world. Its darker capabilities are increasingly being exploited by cybercriminals, whose targets range from unsuspecting families to multinational businesses.
In one unsettling recent case, a mother answered the phone to what sounded like her 15-year-old daughter crying and pleading for help, while supposed kidnappers demanded a $1 million ransom. Except the daughter hadn’t been kidnapped at all. She was completely safe on a ski trip; cyberattackers had used AI to mimic her voice in an attempt to extort money from her mother.
In another alarming case from 2019, the CEO of a UK-based energy firm was conned into transferring €220,000, believing he was on the phone with the director of the parent company. The culprit? Again, AI-powered voice manipulation.
AI attacks on the rise
According to the FBI, AI-enabled voice-spoofing incidents are far from rare. In fact, a wide variety of AI-assisted cybercrimes are being reported in increasing numbers. These include business email compromise (BEC) attacks, sophisticated spear phishing, and sextortion cases in which pictures taken from social media are used to generate explicit material of victims. In the latter category, the FBI has seen a clear uptick throughout 2023, with victims typically receiving financial demands under the threat that AI-generated explicit images will be shared with friends or family members.
Multiple attack methods
Among the numerous emerging threats leveraging AI is WormGPT, a model based on an open-source ChatGPT equivalent called GPT-J. The exact training data used by the model has not been disclosed but allegedly consists of malware-related content. The tool, which comes at a subscription price of €60 per month, is frequently discussed in online hacker forums and was tested earlier this year by a journalist who showcased its use in drafting well-written and personalized phishing emails.
However, WormGPT is just the tip of the iceberg when it comes to publicly available hacking-focused AI. Researchers at cloud data analytics platform Netenrich, for instance, recently uncovered "FraudGPT", a malicious AI bot sold on the dark web and Telegram and tailored for cyberattacks such as spear phishing and malware creation. On a more positive note, researchers aiming to fight cybercrime have trained a large language model named DarkBERT on data from the dark web; unfortunately, threat actors have now allegedly gained access to this model as well. AI is a new weapon in a constant battle between defense and attack.
Many cybercriminals leverage generative AI without paying a premium for cybercrime-specific models, of course. Using curated prompts that can be found freely online, hackers have found ways to jailbreak ChatGPT into generating content that violates OpenAI’s guidelines. An online community on Reddit called “ChatGPTJailbreak” has over 15,000 subscribers and thousands of posts discussing how to break the boundaries of ChatGPT.
Another resource, https://www.jailbreakchat.com/, lists 79 different jailbreak prompts. These relatively primitive crowd-sourced prompts are quickly being superseded by advanced gradient-based techniques that generate jailbreak prompts automatically through machine learning, as demonstrated in recent research published by Carnegie Mellon University. The prompts the researchers developed were found to be broadly transferable, proving effective against many chatbots, including ChatGPT, Bard, and Claude.
These tools allow hackers to generate not only believable voices, email messages, and images, but also scripts that can be used to penetrate an enterprise’s defenses. Research by GitHub shows that 92% of developers currently use generative AI in their daily work, and 70% believe it has increased their productivity. Given hackers’ track record for resourcefulness, we shouldn’t be surprised if these figures are even higher among them. Generative AI lowers the barrier to entry for unskilled individuals to write advanced scripts that illegally exploit computer systems, broadening the pool of people who can engage in criminal activity online.
AI defense against AI attack
Defending a company’s attack surface in a world with lower barriers to cybercrime requires constant monitoring of potential attack vectors. However, there is hope. Just as AI poses a heightened threat, it can also be deployed to shore up network defenses.
A solution like Hadrian’s shows the potential of AI-driven cybersecurity tools to empower ethical hacking. Hadrian is AI-first and uses machine learning to find creative exploits the moment a vulnerability appears, before black-hat hackers have a chance to abuse it.
Just because AI is changing the threat landscape doesn’t mean cybercriminals are certain to win. With Hadrian, you can fight fire with fire by deploying AI for the good guys. Repurpose offensive AI to strengthen your defenses today.