Introducing WormGPT: The Malicious Chatbot for Online Criminals

[Image: a person at a computer posing as an anonymous hacker in a Guy Fawkes mask.]


SlashNext, an email security firm, has reported the emergence of a dangerous chatbot called WormGPT. The ChatGPT-style tool, built to assist online criminals, is being sold on a well-known hacker forum. Unlike mainstream generative AI tools such as ChatGPT or Google’s Bard, WormGPT has no safeguards to stop it from responding to malicious queries.

WormGPT was first presented by its developer in March and released last month. It will respond to queries containing malicious content, making it a ready-made tool for cybercriminals. The rise of AI technologies such as OpenAI’s ChatGPT has opened new vectors for cyberattacks, particularly business email compromise (BEC) attacks: by automating the creation of convincing fake emails personalized to the recipient, criminals can significantly increase the success rate of their campaigns.

The developer of WormGPT has openly stated the goal of building an alternative to ChatGPT that permits illegal activities and can be easily sold online. SlashNext has confirmed that the developer is indeed selling the program on an online forum. The creator has even shared screenshots showing the chatbot being instructed to write malware in Python and to give advice on planning attacks. WormGPT is built on the open-source GPT-J language model and was reportedly trained on data related to malware creation.

Accessing WormGPT requires creating an account on the forum and following the guide provided there. However, it goes without saying that engaging in illegal activities with WormGPT remains against the law. The dangers posed by generative AI, including its potential to produce phishing emails, fake news stories, and malware, are significant: black-hat hackers can exploit it to launch attacks and infiltrate computer systems to harvest personal data.

Defending against malicious generative AI tools requires a multi-faceted approach. Organizations should run comprehensive, regularly updated training programs that educate employees about the risks of BEC attacks, particularly those involving AI. Enhanced email verification measures, such as automatically flagging messages that impersonate internal executives or vendors, or that contain phrases commonly associated with BEC attacks, can also help protect against AI-driven campaigns.
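To make the email-screening idea concrete, the sketch below shows one way such a filter might work, assuming a simple heuristic that combines a display-name impersonation check with a list of BEC-associated phrases. The executive names, domain, and phrase list are purely illustrative assumptions, not part of any vendor's product; real systems would draw on directory data, threat intelligence, and trained classifiers rather than a hard-coded list.

```python
import re

# Illustrative, hypothetical values: a real deployment would pull these from
# directory data and threat-intelligence feeds, not a hard-coded list.
EXECUTIVE_NAMES = {"jane doe", "john smith"}   # internal leaders to watch for
INTERNAL_DOMAIN = "example.com"                # the organization's own domain
BEC_PHRASES = [                                # phrases commonly seen in BEC lures
    r"wire transfer",
    r"urgent payment",
    r"change (of|the) bank(ing)? details",
    r"gift cards?",
    r"are you available",
]

def flag_email(display_name: str, from_address: str, subject: str, body: str) -> list[str]:
    """Return a list of reasons this message looks like a possible BEC attempt."""
    reasons = []
    text = f"{subject}\n{body}".lower()

    # 1. Display-name impersonation: the sender claims to be an internal leader
    #    but the message does not come from the internal domain.
    if display_name.strip().lower() in EXECUTIVE_NAMES and \
            not from_address.lower().endswith("@" + INTERNAL_DOMAIN):
        reasons.append("display name matches an executive but sender is external")

    # 2. Keyword heuristic: phrases frequently used in BEC lures.
    for pattern in BEC_PHRASES:
        if re.search(pattern, text):
            reasons.append(f"contains BEC-associated phrase: {pattern!r}")

    return reasons

if __name__ == "__main__":
    hits = flag_email(
        display_name="Jane Doe",
        from_address="jane.doe@freemail-example.net",
        subject="Urgent payment needed today",
        body="Are you available? I need you to process a wire transfer quietly.",
    )
    for reason in hits:
        print("FLAG:", reason)
```

A filter like this would run alongside, not instead of, standard protections such as SPF/DKIM/DMARC checks and employee reporting; its value is in surfacing messages for human review rather than blocking them outright.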