Malicious AI Tools: FraudGPT and WormGPT


  • Madu Ratnayake

    President, Scybers, Global Cybersecurity Firm | ex-Global CIO | SOC, Cloud & AI Security | Board Advisor | NED | Founder President TiE CMB

    17,493 followers

    Generative AI capabilities are being rapidly adopted by cybercriminals for more sophisticated attacks. The developer behind the FraudGPT malicious chatbot is readying even more sophisticated adversarial tools based on generative AI and Google's Bard technology, one of which will leverage a large language model (LLM) that uses the entirety of the Dark Web as its knowledge base. The forthcoming bots, dubbed DarkBART and DarkBERT, will arm threat actors with ChatGPT-like AI capabilities that go much further than existing cybercriminal genAI offerings, according to SlashNext. In a blog post published Aug. 1, the firm warned that the AIs could lower the barrier to entry for would-be cybercriminals to develop sophisticated business email compromise (BEC) phishing campaigns, find and exploit zero-day vulnerabilities, probe for critical infrastructure weaknesses, create and distribute malware, and much more. "The rapid progression from WormGPT to FraudGPT and now 'DarkBERT' in under a month underscores the significant influence of malicious AI on the cybersecurity and cybercrime landscape," SlashNext researcher Daniel Kelley wrote.

  • Kayne McGladrey, CISSP

    Former CISO in Residence at Hyperproof – now focusing on executive advisory, consulting, and cybersecurity.

    12,710 followers

    Generative AI and the Emergence of Unethical Models: Examining WormGPT

    It is surprising that it has taken malware developers this long to create an unethical GPT model. Enter WormGPT, a rogue variant of the GPT-J language model that brings the formidable power of generative AI into the threat actor supply chain, significantly increasing the risk of business email compromise (BEC) attacks.

    WormGPT Overview: WormGPT is a tool built for malicious activity, harnessing AI technology. It offers several notable capabilities, including unlimited character support, chat memory retention, and code formatting. Specifics regarding its training data, which reportedly revolves around malware, remain undisclosed.

    Experiment Findings: Controlled experiments were conducted to evaluate WormGPT's potential for harm. In one such experiment, it was tasked with creating a manipulative email to deceive an account manager into paying a fraudulent invoice. The results were predictably alarming: the AI crafted a deceptive email with striking persuasive power, showcasing its capacity to orchestrate complex phishing and BEC attacks. These findings reflect the capabilities of generative AI resembling ChatGPT but devoid of ethical boundaries, and they underscore a long-speculated concern: the threat that generative AI tools could pose even in the hands of inexperienced threat actors.

    The Potential of Generative AI for BEC Attacks: Generative AI excels at producing near-perfect grammar, enhancing the perceived authenticity of deceptive emails. It also lowers the entry threshold, making sophisticated BEC attacks accessible to less skilled threat actors. As expected, the evolving landscape of cybersecurity brings new complexities and demands fortified defenses against these advanced threats. The logical progression leads to the use of AI as a defense against AI.
By leveraging AI to counter these AI-orchestrated threats, defenses can potentially outpace and block them before they even launch. Synthetic data generated from core threats and their variants can aid in bolstering defenses against an impending wave of similar attacks. Organizations will increasingly rely on AI tools to discover, detect, and resolve these sophisticated threats. As this reality unfolds, it becomes clear that the question was not if, but when. The road ahead demands both adaptability and tenacity. #cybersecurity #chatGPT
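    The synthetic-data idea above can be sketched in miniature. The following Python example is purely illustrative and not from the post: the seed lure, the swap table, and the `bec_score` heuristic are all hypothetical stand-ins for a real pipeline. It expands one known BEC lure into synthetic variants, then uses the resulting token frequencies as a crude model for flagging similar incoming mail.

    ```python
    import itertools
    import re
    from collections import Counter

    # Hypothetical seed lure modeled on a generic BEC template (illustrative only).
    SEED = "urgent please process the attached invoice payment today"

    # Swap in common BEC phrasings to generate variants of the core threat,
    # mimicking how synthetic training data can be derived from one known lure.
    SWAPS = {
        "urgent": ["urgent", "immediate", "time-sensitive"],
        "invoice": ["invoice", "wire transfer", "payment request"],
        "today": ["today", "by end of day", "asap"],
    }

    def variants(seed: str) -> list[str]:
        """Expand the seed lure into every combination of phrasing swaps."""
        keys = list(SWAPS)
        out = []
        for combo in itertools.product(*(SWAPS[k] for k in keys)):
            text = seed
            for key, repl in zip(keys, combo):
                text = text.replace(key, repl)
            out.append(text)
        return out

    def tokenize(text: str) -> list[str]:
        return re.findall(r"[a-z-]+", text.lower())

    # Token frequencies over the synthetic phishing corpus act as the "model".
    phish_tokens = Counter(t for v in variants(SEED) for t in tokenize(v))

    def bec_score(email: str) -> float:
        """Fraction of the email's tokens that appear in the synthetic corpus."""
        tokens = tokenize(email)
        if not tokens:
            return 0.0
        return sum(1 for t in tokens if t in phish_tokens) / len(tokens)

    suspicious = bec_score("Immediate: process this wire transfer by end of day")
    benign = bec_score("Minutes from yesterday's architecture review meeting")
    ```

    A production defense would of course use a trained classifier over far richer features, but the shape is the same: derive variants from known threats, then score new messages against what those variants have in common.
    
    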

  • Richard Staynings

    Keynote Speaker, Cybersecurity Luminary, Evangelist, Thought Leader, Advocate, and Board Member

    25,910 followers

    Cybercriminals have developed a generative AI tool called WormGPT designed to help grammatically challenged criminals craft convincing business email compromise (BEC) missives. The crimeware tool has been in development since 2021 and, as of last month, is being promoted on illicit online forums. A report released Thursday by cybersecurity firm SlashNext said WormGPT is being distributed as a subscription-based generative AI tool. The report's author, Daniel Kelley, a self-described "reformed black hat," said criminals promoting the tool boast that it is a limitless alternative to OpenAI's popular ChatGPT service. The striking difference is that WormGPT is designed for "black hat" hackers with only bad intent. Public generative AI tools such as OpenAI's ChatGPT, which launched last year, have implemented safeguards to keep their products from being used for nefarious ends such as BEC scams. WormGPT's promoters claim their product has zero ethical constraints and can spit out AI-created BEC content for urgently soliciting funds from targeted victims, as well as whip up customizable malware code. "In summary, it's similar to ChatGPT but has no ethical boundaries or limitations," Kelley wrote. Kelley's hacking credentials date back to his teens; he pleaded guilty in 2016 to multiple hacking offenses. https://lnkd.in/eQg-iNgN
