FBI Reveals Explosive View of Open Source AI’s Devastating Impact on Hacking! You will not believe your eyes!

The impact of open source AI on hacking: the FBI’s point of view

Hackers are increasingly using artificial intelligence (AI) to enhance their criminal activities. According to the FBI, they are notably exploiting open source AI models to ensnare Internet users.

Use of AI by cybercriminals

Cybercriminals use chatbots built on language models, such as ChatGPT, Google Bard, or Claude, to facilitate their malicious activities, relying on their expertise to bypass the safety measures built into these tools.

The FBI has warned of widespread use of language models by the criminal community. However, it has noted that the AI models most popular with the general public are not the favorites of hackers.

Open source models, a tool favored by hackers

Hackers prefer free, customizable open source AI models to those controlled by companies. Because these models are accessible to anyone on the internet, they can easily be used to generate illicit content.

Criminals also use custom AI models developed by other hackers: the dark web hosts many chatbots designed specifically to generate illegal content.

The different uses of AI by cybercriminals

Hackers use AI to design phishing pages that imitate the interfaces of official platforms. They also exploit generative AI to create polymorphic malware, code that rewrites itself to evade antivirus detection.

Additionally, they use deepfake technology to extort money from their victims, generating fake images and videos to harass or blackmail them.

The future of AI and hacking

The FBI predicts an increase in the criminal use of AI as the technology becomes more widely available. It is therefore essential to develop prevention and protection strategies to counter the malicious use of AI by hackers.

It is imperative to ensure responsible and ethical use of AI, while securing open source AI models and strengthening security measures to prevent manipulation.

Source: PCMag

Cybercrime explosion: artificial intelligence turned into a weapon! Discover the latest threats and an alarming outlook

Cybercriminals and the exploitation of artificial intelligence

Cybercriminals are increasingly using artificial intelligence (AI) to make their attacks more credible and effective. This use of generative AI, popularized by ChatGPT, is spreading through the world of cybercrime. Phishing, ransomware, scams, and even CEO fraud (the so-called “president scam”) now benefit from these new tools.

A democratization of AI among cybercriminals

The democratization of AI among cybercriminals makes their attacks more efficient and more credible. Jean-Jacques Latour, director of cybersecurity expertise at Cybermalveillance.gouv.fr, emphasizes that the methods used by criminals remain the same, but the volume and force of attacks are increasing considerably.

More sophisticated phishing attacks

Phishing, which typically arrives as emails promising free gifts or discounts, is becoming increasingly sophisticated. Scammers adapt their language to convince users to click on questionable links or sites, avoiding the glaring syntax and spelling errors that once gave them away.
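
Because polished language no longer betrays phishing, defenders lean on other signals, such as domains that imitate legitimate ones. Here is a minimal sketch of that idea, assuming an invented allowlist; real mail filters combine many more signals (sender reputation, URL age, authentication results):

```python
# Minimal sketch: flag lookalike domains by string similarity to a small
# allowlist. The domain list is invented for this example.
from difflib import SequenceMatcher

LEGITIMATE = {"paypal.com", "amazon.com", "impots.gouv.fr"}

def looks_like(domain: str, legit: str, threshold: float = 0.85) -> bool:
    """True when a domain is suspiciously similar to, but not equal to,
    a legitimate one (e.g. 'paypa1.com' vs 'paypal.com')."""
    if domain == legit:
        return False
    return SequenceMatcher(None, domain, legit).ratio() >= threshold

def is_suspicious(domain: str) -> bool:
    return any(looks_like(domain, legit) for legit in LEGITIMATE)

assert is_suspicious("paypa1.com")        # one-character substitution
assert not is_suspicious("paypal.com")    # the real domain passes
```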

Generative AI to create personalized malware

Generative AI can be hijacked to create custom malware that exploits known vulnerabilities in computer programs. Tools such as ThreatGPT, WormGPT, and FraudGPT are proliferating on the dark web and gaining popularity among malicious actors.

Mass exploitation of data by hackers

Hackers also use AI to sort and exploit large amounts of data once they have infiltrated a computer system. This allows them to maximize their profits by targeting the most relevant information.

CEO fraud and deepfake audio generators

AI is also used in CEO fraud, where hackers gather information on company executives in order to trigger fraudulent transfers. Using “deepfake” audio generators, they can convincingly imitate a manager’s voice and issue transfer orders.

Ransomware and vishing

Ransomware already uses AI to modify its code and evade detection by security tools. The technique of vishing (voice phishing), in which a fake bank adviser requests a money transfer, could likewise be enhanced with AI.

The use of synthetic content generated by AI

British police have already reported cases in which synthetic AI-generated content was used to deceive, harass, or extort victims. While no such cases have yet been officially recorded in France, suspicions remain about criminals’ use of AI there as well.

The “zero trust” rule in cybersecurity

Faced with these new threats, it is essential to apply the “zero trust” rule to cybersecurity and AI: trust nothing by default, and put adequate protection measures in place. The most active hackers are generally well-organized networks based in Eastern Europe, but state-sponsored hackers from rogue states should not be overlooked.
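
To make “zero trust” concrete, here is a minimal sketch of a transfer-approval flow; the function names, the demo-only key, and the threshold are invented for illustration. The point is that every request must carry verifiable proof of identity, and large transfers require confirmation on a pre-registered channel, no matter who they appear to come from:

```python
import hashlib
import hmac
from dataclasses import dataclass

# Demo-only secret; in practice, per-user keys live in an HSM or
# identity provider, never in source code.
SECRET_KEY = b"demo-only-secret"

@dataclass
class TransferRequest:
    requester: str        # claimed identity, e.g. "cfo@example.com"
    amount: float
    destination_iban: str
    signed_token: str     # proof of identity presented with the request

def sign(identity: str) -> str:
    """Issue a token (stand-in for a real identity provider)."""
    return hmac.new(SECRET_KEY, identity.encode(), hashlib.sha256).hexdigest()

def verify_token(token: str, claimed_identity: str) -> bool:
    """Authenticate on every request; never rely on a prior session."""
    return hmac.compare_digest(token, sign(claimed_identity))

def confirmed_out_of_band(request: TransferRequest) -> bool:
    """Stand-in for a callback on a pre-registered channel. A familiar
    voice on the phone is not proof: it can be a deepfake."""
    return False  # in this sketch, nothing is pre-confirmed

def authorize_transfer(req: TransferRequest, review_threshold: float = 10_000) -> bool:
    if not verify_token(req.signed_token, req.requester):
        return False                  # identity not proven, refuse
    if req.amount >= review_threshold and not confirmed_out_of_band(req):
        return False                  # large transfers need a second check
    return True

# A request that "sounds like" the CFO but carries no valid token is refused.
bogus = TransferRequest("cfo@example.com", 250_000.0, "FR7600000000", "forged")
assert authorize_transfer(bogus) is False
```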

Conclusion

AI-powered cybercrime poses a growing threat. Cybercriminals are increasingly using AI to improve their techniques and carry out more credible attacks. It is essential to remain vigilant and put appropriate protective measures in place to counter these threats.

Discover the Dark Web and immerse yourself in the incredible world of hackers and unbridled AI!

The fascinating world of hackers and unbridled artificial intelligence on the Dark Web

Hackers are increasingly exploiting generative artificial intelligence (AI) to carry out their criminal activities. A Kaspersky Dark Web investigation found that the use of AI, particularly generative AI tools, has become common and concerning.

Thousands of discussions on the use of AI for illegal purposes

Kaspersky Digital Footprint Intelligence, a service of the Russian cybersecurity firm Kaspersky, analyzed the Dark Web to identify discussions about hackers’ use of AI. Researchers found thousands of conversations discussing the use of AI for illegal and malicious purposes.

In 2023, no fewer than 3,000 such discussions were recorded, with a peak in March. Although their volume tended to decline over the year, these discussions remain present and active on the Dark Web.

AI at the service of cybercriminals

These discussions mainly revolve around malware development and the illegal use of language models. Hackers are exploring avenues such as processing stolen data and analyzing files from infected devices, among other uses.

These exchanges demonstrate the growing interest of hackers in AI and their desire to exploit its technical possibilities in order to carry out criminal activities more effectively.

Selling Stolen ChatGPT Accounts and Jailbreaks on the Dark Web

Beyond discussions about the use of AI, the Dark Web is also a thriving marketplace for stolen ChatGPT accounts. Kaspersky identified more than 3,000 ads selling access to paid ChatGPT accounts.

Hackers also offer automated registration services to create accounts in bulk on demand, distributing them through secure channels such as Telegram.

Additionally, researchers have observed an increase in sales of jailbroken chatbots such as WormGPT, FraudGPT, XXXGPT, WolfGPT, and EvilGPT. These malicious ChatGPT alternatives are free of limitations, uncensored, and loaded with additional features.

A growing threat to cybersecurity

The use of AI by hackers represents a growing threat to cybersecurity. Language models can be exploited maliciously, increasing the potential number of cyberattacks.

It is therefore essential to strengthen cybersecurity measures to counter these new forms of AI-based attacks. Experts must remain vigilant in the face of these constant developments and work to develop effective strategies to counter cybercriminals.

Watch out for a terrifying iOS malware attack that uses AI to steal faces and hack biometrics!

New threat: iOS malware steals faces to bypass biometrics with AI face swaps

A group of Chinese hackers has developed a new malware strain called “GoldPickaxe” that threatens users of iOS mobile devices. The software uses AI-driven face swapping to bypass biometric checks, stealing facial scans, personal identity documents, and phone numbers. Cybercriminals can then use this information to access victims’ bank accounts.

A sophisticated attack

Group-IB researchers have identified at least one victim of this attack, a Vietnamese citizen who lost approximately $40,000 as a result of this deception. What makes this attack special is the use of deepfakes, manipulated videos that can fool the biometric security systems of Southeast Asian banks. The malware masquerades as a government application and primarily targets elderly people. Victims are encouraged to scan their faces, which allows hackers to generate deepfakes from these scans.

The challenge of biometric authentication

This attack highlights the fact that deepfake technologies have reached an advanced level and are capable of bypassing biometric authentication mechanisms. Criminals exploit this weakness and take advantage of the fact that most users are unaware of this threat. Andrew Newell, scientific director at iProov, explains that deepfakes are a tool of choice for hackers because they give them incredible power and control.

How hackers bypass Thai banks

The Bank of Thailand has implemented a policy to combat financial fraud by requiring facial recognition for all important customer actions. However, the GoldPickaxe malware quickly bypassed this security measure.

The malware, which appeared three months after the bank’s policy took effect, poses as an application called “Digital Pension,” used by elderly people to receive their pension digitally. Victims are encouraged to scan their face, upload their government ID card, and submit their phone number. Unlike other banking malware, GoldPickaxe does not overlay a real financial application; instead, it collects everything needed to bypass authentication checks so the attackers can log into victims’ bank accounts manually.

Fight against biometric banking trojans

Attacks like this show that the banking industry must evolve quickly to deal with growing threats by implementing more advanced security measures adapted to new technological challenges. Banks are advised to implement sophisticated monitoring of user sessions, and customers to adopt good security practices: avoid clicking on suspicious links, verify the authenticity of banking communications, and promptly contact the bank if fraud is suspected.
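
As a rough illustration of what such session monitoring can look like, here is a minimal sketch; the signals, weights, and thresholds are invented for the example, and production systems rely on far richer behavioral models:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    user_id: str
    device_id: str
    hour: int                  # local hour of day, 0-23
    transfer_amount: float
    payee: str

@dataclass
class UserProfile:
    known_devices: set = field(default_factory=set)
    known_payees: set = field(default_factory=set)
    typical_max_transfer: float = 0.0

def risk_score(s: Session, profile: UserProfile) -> int:
    """Naive additive scoring; real systems use ML models and device
    fingerprinting, but the underlying signals are similar."""
    score = 0
    if s.device_id not in profile.known_devices:
        score += 2                      # first time this device is seen
    if s.payee not in profile.known_payees:
        score += 2                      # first transfer to this payee
    if s.transfer_amount > 3 * max(profile.typical_max_transfer, 1.0):
        score += 3                      # far above the user's usual amounts
    if s.hour < 6:
        score += 1                      # activity at an unusual hour
    return score

def review_needed(s: Session, profile: UserProfile, threshold: int = 4) -> bool:
    """Flag the session for step-up verification (e.g. a callback)."""
    return risk_score(s, profile) >= threshold

profile = UserProfile({"iphone-A"}, {"landlord"}, typical_max_transfer=800.0)
suspicious = Session("u1", "new-device", 3, 12_000.0, "unknown-account")
assert review_needed(suspicious, profile)  # flagged before the money moves
```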