Burglars 2.0: AI as a New Weapon of Cybercriminals

Cybercrime is constantly evolving, and the latest tools used by criminals are those based on artificial intelligence. It seems that the era of AI-driven attacks has just arrived. What consequences does this change bring? Here are the latest reports from the cybersecurity frontline.

Artificial intelligence (AI) is increasingly becoming a tool in the hands of cybercriminals, as shown by a recent report from network security company SlashNext. The report, published on Tuesday, raises the alarm about the rise of AI-enabled cybercrime, driven by new tools available on the dark side of the internet, known as the dark web [https://www.techopedia.com/definition/31594/darkweb].

In addition to the recently discovered WormGPT, an AI tool that has attracted considerable attention, other advanced programs such as FraudGPT have also appeared on the dark web. FraudGPT is designed to generate phishing pages, write malicious code, create hacking tools, and draft fraudulent letters.

Researchers from SlashNext contacted a person using the pseudonym CanadianKingpin12 via the Telegram messenger [https://telegram.org/] to learn more about FraudGPT. Through these conversations, they discovered that cybercriminals are working on two new AI chatbots: DarkBart and DarkBert. Both are expected to have internet access and to integrate with Google’s image recognition technology, Google Lens [https://lens.google.com/], which would allow them to handle both text and images.

The pseudonymous hacker revealed to the researchers that DarkBert, initially designed by the data analysis company S2W as a tool to fight cybercrime, is now being used to commit it. DarkBert is said to assist with advanced social engineering attacks, the exploitation of vulnerabilities in computer systems, and the distribution of other malicious software, including ransomware.

Defending against rapidly evolving AI-based cybercrime tools is a major challenge for companies. SlashNext recommends that organizations actively train employees in cybersecurity and implement enhanced email verification measures.
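By way of illustration (this sketch is not part of the SlashNext report), enhanced email verification typically builds on standards such as SPF, DKIM, and DMARC. The minimal Python example below, which assumes the dnspython library is installed and uses a placeholder domain, shows one building block: checking whether a sender's domain publishes SPF and DMARC policies that receiving mail servers can use to reject spoofed messages.

```python
# Illustrative sketch: check whether a sender domain publishes SPF and DMARC
# DNS records, using the dnspython library (pip install dnspython).
# The domain below is a placeholder; production mail gateways enforce
# SPF/DKIM/DMARC themselves rather than relying on ad-hoc scripts like this.
import dns.resolver


def get_txt_records(name: str) -> list[str]:
    """Return the TXT records published for a DNS name, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    return [b"".join(rdata.strings).decode("utf-8", "replace") for rdata in answers]


def check_email_authentication(domain: str) -> None:
    """Print whether the domain publishes SPF and DMARC policies."""
    spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

    print(f"{domain}:")
    print("  SPF:  ", spf[0] if spf else "no SPF record found")
    print("  DMARC:", dmarc[0] if dmarc else "no DMARC record found")


if __name__ == "__main__":
    check_email_authentication("example.com")  # placeholder domain
```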

A report by Immunefi, a company specializing in network security, points out the difficulties cybersecurity experts face in using AI to combat cybercrime. Most of the surveyed experts said that AI offers only “limited accuracy” in identifying potential threats, and 61% stated that it lacks specialized knowledge.

The impact that new AI capabilities can have in the hands of cybercriminals should not be underestimated. As SlashNext emphasizes, “the quick transition from WormGPT to FraudGPT, and now DarkBert, within just a month underlines the significant impact of malicious AI on the landscape of cybersecurity and cybercrime.”
