Is ChatGPT democratizing cybercrime?
I joined Hadrian as a HackOps Engineer in February last year. Day-to-day, I deploy modules, write code for the hacking teams, and ensure the reliability and consistency of those modules. In this industry the challenges are ever-changing, so I make a point of staying up to date - but ChatGPT would have been hard to miss regardless!
ChatGPT uses NLP (Natural Language Processing) to absorb prompts and respond with extremely confident answers in a conversational, natural style. Trawling through 10 pages of Google results for the perfect summary for an essay introduction seems to be a problem of the past for students. And apparently, it codes too...
Since its launch in November last year, ChatGPT has seen an influx of positive press, helped by its accessibility - anyone can access the magic within minutes. In the first week, over one million users signed up. The sensationalist headlines that followed, such as “Start a business using only AI” and “Will ChatGPT replace your job?”, predictably created a massive buzz. And when it passed the US Medical Licensing Exam, its status as a worldwide meme was confirmed. The idea of computers causing mass job losses inevitably conjures up images from science fiction movies and pop culture - even if I do find it quite likely.
More recently, though, headlines have shifted to more salient concerns - among them, that malicious actors were using its features to build malware and aid cyberattacks, and that criminals with little to no programming knowledge could now create harmful code. Did ChatGPT just accidentally democratize cybercrime?
I decided to test it out and review ChatGPT’s hacking potential.
Can ChatGPT write phishing emails?
While it’s true that ChatGPT can be used to write the text for phishing emails, its added value really depends on the kind of attack. For widespread “spray and pray” campaigns - the kind sent to thousands of inboxes - not so much; those are the emails riddled with typos. The goal there is simply to incite you to click and download something malicious, or to attempt wire fraud.
Fun fact: the mistakes you see in this kind of email are often very intentional - they weed out the ‘smarter’ recipients who won’t become victims. You could use ChatGPT here to create a somewhat believable set of emails, but the quality of the written content is not really the point of those attacks. Targeted attacks are a different story. A scammer can, for example, feed in a series of Tweets from the person they wish to impersonate; this ‘educates’ ChatGPT, and the messages it creates will be in that person’s style. A generic message such as “Please complete this survey” from a hacker impersonating your colleague could be transformed into a very believable message that imitates their style of speech so accurately it may be hard to distinguish from the real thing. This is where ChatGPT and other NLP models really shine.
Can ChatGPT create ransomware?
As for creating unique ransomware code, there is already plenty of ransomware available for purchase on the dark web, or even for free on GitHub. When trialing ChatGPT, I found you still need to know the right questions to ask - the ML model has its weaknesses - and you usually learn what to ask by knowing a little about programming. If you have zero technical knowledge, there are faster ways to get your hands on ransomware than trying your luck with various prompts. Furthermore, the hard part about ransomware is not creating it - it’s spreading it. Finding a vulnerability, or using a well-known exploit that remains unpatched on a server, is the difficult step, and ChatGPT does not help with executing an attack.
Has ChatGPT revolutionized cybercrime?
In conclusion, I believe ChatGPT is just another tool in a cybercriminal’s arsenal, similar to the early days of search engines. While there is buzz around its novelty, I don’t think OpenAI has revolutionized cybercrime - but it has made it a fraction easier. Even if the platform enforces stricter restrictions on certain questions or information, another platform with similar if not identical features will emerge. The biggest concern I see is the platform’s ability to imitate human language patterns and colloquialisms. Combine NLP with other technologies like deepfakes, and we could see very sophisticated attacks in the near future. It’s only a matter of time before attackers combining multiple tools become more common, and we need to be prepared for some elaborate scenarios.
I wish employers a lot of luck educating their employees not to fall for these kinds of scams - it’s a good time to be a cybersecurity consultant!
For tips on how to educate your staff on cyber risks, read our blog post on employee policy suggestions for managing your attack surface.