Published on 03:50 PM, December 23, 2023

Scammers could use ChatGPT to conduct massive frauds: Sophos

"If an AI technology exists that can create complete, automated threats, people will eventually use it," said Ben Gelman, senior data scientist at Sophos. Image: Jonathon Kemper/Unsplash

The cybersecurity company Sophos has recently released two reports about the use of AI in cybercrime. The first report, 'The Dark Side of AI: Large-Scale Scam Campaigns Made Possible by Generative AI', demonstrates how, in the future, scammers could leverage technology like ChatGPT to conduct fraud on a massive scale with minimal technical skills. However, the second report, titled 'Cybercriminals Can't Agree on GPTs', found that, despite AI's potential, rather than embracing large language models (LLMs) like ChatGPT, some cybercriminals are sceptical and even concerned about using AI for their attacks. Below is a brief breakdown of both reports.

The dark side of AI

Using a simple e-commerce template and LLM tools like GPT-4, Sophos X-Ops was able to build a fully functioning website with AI-generated images, audio, and product descriptions, as well as a fake Facebook login and fake checkout page to steal users' login credentials and credit card details. The website required minimal technical knowledge to create and operate, and, using the same tool, Sophos X-Ops was able to create hundreds of similar websites in minutes with one button.
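The "one button, hundreds of sites" claim rests on ordinary templating: once a page template exists, stamping out variants is trivial. The sketch below illustrates only that templating idea with harmless placeholder text; it is not Sophos's tool, and in the report an LLM supplied the per-site text and images rather than a hand-written list.

```python
from string import Template

# A single page template; per-site details are substituted in.
# In the scenario Sophos describes, generative AI would produce the
# names, descriptions, and images fed into a template like this one.
PAGE = Template(
    "<html><head><title>$name</title></head>"
    "<body><h1>$name</h1><p>$blurb</p></body></html>"
)

def generate_sites(entries):
    """Render one HTML page per (name, blurb) entry."""
    return [PAGE.substitute(name=n, blurb=b) for n, b in entries]

pages = generate_sites([
    ("Example Store A", "Placeholder product description."),
    ("Example Store B", "Placeholder product description."),
])
print(len(pages))  # one page per entry
```

The point of the research is that the manual effort collapses to maintaining one template and a list, which is why the same approach scales to hundreds of sites in minutes.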

"It's natural - and expected - for criminals to turn to new technology for automation. The original creation of spam emails was a critical step in scamming technology because it changed the scale of the playing field," said Ben Gelman, senior data scientist at Sophos. Gelman added that new AIs are poised to do the same; if an AI technology exists that can create complete, automated threats, people will eventually use it.

Gelman further added, "However, part of the reason we conducted this research was to get ahead of the criminals. By creating a system for large-scale fraudulent website generation that is more advanced than the tools criminals are currently using, we have a unique opportunity to analyze and prepare for the threat before it proliferates."

Cybercriminals can't agree on GPTs

For its research into attacker attitudes towards AI, Sophos X-Ops examined four prominent dark web forums for LLM-related discussions. While cybercriminals' AI use appears to be in its early stages, threat actors on the dark web are discussing its potential for social engineering. Sophos X-Ops has already witnessed the use of AI in romance-based crypto scams.

In addition, Sophos X-Ops found that the majority of posts were related to compromised ChatGPT accounts for sale and 'jailbreaks' - ways to circumvent the protections built into LLMs so cybercriminals can abuse them for malicious purposes. Sophos X-Ops also found ten ChatGPT derivatives that their creators claimed could be used to launch cyberattacks and develop malware. However, threat actors had mixed reactions to these derivatives and other malicious applications of LLMs, with many criminals expressing concern that the creators of the ChatGPT imitators were trying to scam them.