Explosion of cybercrime: the danger of artificial intelligence! Discover the latest threats and an alarming outlook

Cybercriminals and the exploitation of artificial intelligence

Cybercriminals are increasingly using artificial intelligence (AI) to make their attacks more credible and effective. Generative AI, popularized by ChatGPT, is spreading through the world of cybercrime: phishing, ransomware, online scams and even CEO fraud (the so-called "president scam") now benefit from these new tools.

A democratization of AI among cybercriminals

The democratization of AI among cybercriminals makes them more efficient and credible. Jean-Jacques Latour, director of cybersecurity expertise at Cybermalveillance.gouv.fr, emphasizes that the methods used by criminals remain the same, but that the volume and sophistication of attacks are increasing considerably.

More sophisticated phishing attacks

Phishing, which typically arrives as emails promising free gifts or discounts, is becoming increasingly sophisticated. Scammers now polish their language to convince users to click on questionable links or sites, avoiding the glaring grammar and spelling errors that once gave them away.
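To make the "questionable links" point concrete, here is a minimal, purely illustrative sketch of the kind of surface-level heuristics a mail filter might apply to a URL. The keyword list and rules are hypothetical examples chosen for this article; real phishing detection relies on reputation feeds and machine learning, not a handful of rules.

```python
import re
from urllib.parse import urlparse

# Hypothetical bait words for illustration only.
SUSPICIOUS_KEYWORDS = {"login", "verify", "gift", "free", "account-update"}

def looks_suspicious(url: str) -> bool:
    """Flag URLs showing common phishing tells (illustrative sketch)."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    # A raw IP address instead of a domain name is a classic red flag.
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host):
        return True
    # Punycode-encoded hosts can hide lookalike characters.
    if host.startswith("xn--") or ".xn--" in host:
        return True
    # Bait words in the host or path.
    text = (host + parsed.path).lower()
    return any(word in text for word in SUSPICIOUS_KEYWORDS)
```

Note that AI-written phishing defeats exactly the oldest of these tells, broken language, which is why such static rules are at best a first line of defense.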

Generative AI to create personalized malware

Generative AI can be hijacked to create custom malware that exploits known vulnerabilities in software. Tools such as ThreatGPT, WormGPT, and FraudGPT are proliferating on the Dark Web and gaining popularity among malicious actors.

Mass exploitation of data by hackers

Hackers also use AI to sort and exploit large amounts of data once they have infiltrated a computer system. This allows them to maximize their profits by targeting the most relevant information.

CEO fraud and deepfake audio generators

AI is also fueling CEO fraud, in which hackers gather information on company executives in order to trigger fraudulent transfers. Using "deepfake" audio generators, they can convincingly imitate an executive's voice and issue transfer orders.

Ransomware and vishing

Ransomware already uses AI to modify its code and evade detection by security tools. Vishing, in which a scammer posing as a bank adviser requests a money transfer over the phone, could likewise be made more convincing with AI.

The use of synthetic content generated by AI

British police have already reported cases in which synthetic AI-generated content was used to deceive, harass or extort victims. Although no such cases have yet been officially recorded in France, criminals there are strongly suspected of using AI as well.

The “zero trust” rule in cybersecurity

Faced with these new threats, it is essential to apply the "zero trust" rule to cybersecurity and AI: trust nothing by default, and put adequate protection measures in place. The most active hackers are generally well-organized networks based in Eastern Europe, but state-sponsored hackers from rogue states should not be overlooked either.

Conclusion

AI-powered cybercrime poses a growing threat. Cybercriminals are increasingly using AI to improve their techniques and carry out more credible attacks. It is essential to remain vigilant and put appropriate protective measures in place to counter these threats.

Discover the Dark Web and immerse yourself in the incredible world of hackers and unbridled AI!

The fascinating world of hackers and unbridled artificial intelligence on the Dark Web

Hackers are increasingly exploiting generative artificial intelligence (AI) in their criminal activities. A Kaspersky investigation of the Dark Web found that the use of AI, particularly generative AI tools, has become both common and concerning.

Thousands of discussions on the use of AI for illegal purposes

Kaspersky Digital Footprint Intelligence, a service of the Russian cybersecurity firm Kaspersky, analyzed the Dark Web to identify discussions about hackers' use of AI. Researchers found thousands of conversations about using AI for illegal and malicious purposes.

In 2023, no fewer than 3,000 such discussions were recorded, peaking in March. Although their number tapered off over the year, they remained present and active on the Dark Web.

AI at the service of cybercriminals

These discussions mainly revolve around malware development and illegal uses of language models. Hackers are exploring avenues such as processing stolen data and analyzing files from infected devices, among others.

These exchanges demonstrate the growing interest of hackers in AI and their desire to exploit its technical possibilities in order to carry out criminal activities more effectively.

Selling Stolen ChatGPT Accounts and Jailbreaks on the Dark Web

Beyond discussions about the use of AI, the Dark Web is also a thriving market for stolen ChatGPT accounts. Kaspersky identified more than 3,000 listings offering paid ChatGPT accounts for sale.

Hackers also offer automated registration services to create accounts in bulk on demand. These services are distributed through channels such as Telegram.

Additionally, researchers have seen an increase in the sale of jailbroken chatbots such as WormGPT, FraudGPT, XXXGPT, WolfGPT, and EvilGPT. These malicious versions of ChatGPT are free from limitations, uncensored and loaded with additional features.

A growing threat to cybersecurity

The use of AI by hackers represents a growing threat to cybersecurity. Language models can be exploited maliciously, increasing the potential number of cyberattacks.

It is therefore essential to strengthen cybersecurity measures to counter these new forms of AI-based attacks. Experts must remain vigilant in the face of these constant developments and work to develop effective strategies to counter cybercriminals.