Cybercrime explosion: artificial intelligence in the wrong hands! Discover the latest threats and their alarming implications

Cybercriminals and the exploitation of artificial intelligence

Cybercriminals are increasingly using artificial intelligence (AI) to make their attacks more credible and effective. Generative AI, popularized by ChatGPT, is spreading through the world of cybercrime: phishing, ransomware, online scams and even CEO fraud (the so-called "president scam") now benefit from these new tools.

A democratization of AI among cybercriminals

The democratization of AI is making cybercriminals more efficient and more credible. Jean-Jacques Latour, director of cybersecurity expertise at Cybermalveillance.gouv.fr, stresses that the methods criminals use remain the same, but both the volume and the force of attacks are increasing considerably.

More sophisticated phishing attacks

Phishing emails, which often promise free gifts or discounts, are becoming increasingly sophisticated. Scammers now adapt their language to convince users to click on dubious links or sites, avoiding the glaring grammar and spelling mistakes that once gave them away.
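This matters because many legacy spam filters leaned on exactly those lexical tells. A hypothetical, simplified sketch (not from the article; the word lists and scoring are invented for illustration) of such a heuristic, and why AI-polished wording slips past it:

```python
import re

# Invented examples of the crude lexical red flags old filters looked for
COMMON_MISSPELLINGS = {"recieve", "acount", "verfy", "pasword"}
URGENCY_CUES = {"act now", "immediately", "suspended"}

def naive_phishing_score(text: str) -> int:
    """Count crude lexical red flags in a message.

    AI-polished phishing text contains none of these misspellings,
    so this kind of heuristic scores it as harmless.
    """
    lowered = text.lower()
    words = set(re.findall(r"[a-z]+", lowered))
    score = len(words & COMMON_MISSPELLINGS)
    score += sum(1 for cue in URGENCY_CUES if cue in lowered)
    return score

clumsy = "Please verfy your acount immediately or it will be suspended"
polished = "We noticed unusual activity on your account. Please review your recent sign-ins."

print(naive_phishing_score(clumsy))    # 4: two misspellings, two urgency cues
print(naive_phishing_score(polished))  # 0: slips past purely lexical checks
```

The point of the sketch is not the scoring itself but the asymmetry: once generative AI removes the surface errors, defenses have to look at context (sender, links, intent) rather than wording.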

Generative AI to create personalized malware

Generative AI can be hijacked to create custom malware that exploits known vulnerabilities in software. Programs such as ThreatGPT, WormGPT and FraudGPT are circulating on the darknet and gaining popularity among malicious actors.

Mass exploitation of data by hackers

Hackers also use AI to sift through and exploit the large volumes of data they capture once they have infiltrated a computer system, allowing them to maximize their profits by targeting the most valuable information.

CEO fraud and deepfake audio generators

AI is also fueling CEO fraud (the "president scam"), in which hackers gather information on company executives in order to trigger fraudulent transfers. Thanks to audio "deepfake" generators, they can convincingly imitate a manager's voice and issue transfer orders.

Ransomware and vishing

Some ransomware already uses AI to modify its code and evade detection by security tools. Vishing, in which a caller posing as a bank adviser requests a money transfer, could likewise be made more convincing with AI.

The use of synthetic content generated by AI

British police have already reported cases in which synthetic AI-generated content was used to deceive, harass or extort victims. Although no such case has yet been officially recorded in France, criminals there are suspected of using AI as well.

The “zero trust” rule in cybersecurity

Faced with these new threats, it is essential to apply the "zero trust" rule to cybersecurity and AI alike: trust nothing by default, and put adequate protection measures in place. The most active hackers are generally well-organized networks based in Eastern Europe, but state-sponsored hackers from rogue states should not be overlooked.


AI-powered cybercrime poses a growing threat. Criminals are using AI to refine their techniques and mount ever more credible attacks; staying vigilant and putting appropriate protective measures in place is essential.