Cybercriminals experiment with AI on the dark web

25/01/2024

Kaspersky's Digital Footprint Intelligence team has found nearly 3,000 posts on the dark web mainly about using ChatGPT and other LLMs for illegal activities. Threat actors are exploring a range of possibilities, from creating nefarious alternatives to the chatbot to jailbreaking the original and more. Stolen ChatGPT accounts, and services offering their automated creation en masse, are also flooding dark web channels, accounting for another 3,000 posts.

In 2023, experts from the Kaspersky Digital Footprint Intelligence team discovered nearly 3,000 posts on the dark web discussing the use of ChatGPT for illegal purposes or tools built on AI technologies. Although discussions peaked last March, they have continued since.

The dynamics of discussions on the dark web in 2023 about the use of ChatGPT or other AI tools. Source: Kaspersky Digital Footprint Intelligence

"Threat actors are actively exploring different strategies to implement ChatGPT and AI. These often involve malware development and other forms of illegal use of language models, such as processing stolen user data, analysing files from infected devices and more. The popularity of AI tools has led to the integration of automated responses from ChatGPT or its equivalents in some cybercriminal forums. In addition, threat actors tend to share jailbreaks through various dark web channels. These are special sets of prompts that can unlock additional functionality. They also devise ways to abuse legitimate model-based tools, such as those for pentesting, for malicious purposes," explained Alisa Kulishenko, digital footprint analyst at Kaspersky.

Besides ChatGPT itself, much attention is being paid to projects such as XXXGPT, FraudGPT and others. These language models are marketed on the dark web as alternatives to ChatGPT, with additional functionality and none of the original's limitations.

Stolen ChatGPT accounts for sale
Another threat to users and businesses is the market for accounts for the paid version of ChatGPT. In 2023, another 3,000 posts (in addition to those mentioned earlier) were identified on the dark web and shadow Telegram channels offering ChatGPT accounts for sale. These posts distributed stolen accounts or promoted auto-registration services that create accounts en masse on demand. Certain posts were published repeatedly across multiple dark web channels.

The dynamics of dark web posts in 2023 offering stolen ChatGPT accounts or self-registration services. Source: Kaspersky Digital Footprint Intelligence

"While AI tools themselves are not inherently dangerous, cybercriminals are trying to devise efficient ways to use language models. This sets a trend of lowering the threshold of entry into cybercrime and, in some cases, potentially increasing the number of cyberattacks. However, generative AI and chatbots are unlikely to revolutionise the attack landscape, at least not in 2024. Automated cyberattacks often also invite automated defences. Nevertheless, staying abreast of attackers' activities is crucial to staying ahead of hostile parties in the field of enterprise cybersecurity," said Alisa Kulishenko, digital footprint analyst at Kaspersky.


© Dutch Tech On Heels - 2024