
Leaders Opinion: After WormGPT, FraudGPT Makes it Easier for Cybercriminals

The emergence of these AI-powered tools poses a significant threat to enterprises.

In the dark corners of the internet, a disturbing development has recently come to light, thanks to the efforts of Netenrich security researcher Rakesh Krishnan. It’s a model known as FraudGPT, and it has been circulating on darknet forums and Telegram channels since July 22, 2023. What sets this model apart is that it’s available through a subscription pricing model: $200 per month, $1,000 for six months, or $1,700 for a year.

Aswin Sreenivas, Head of Data Science/Business Intelligence/Customer Insights at StarHub, offers a thought-provoking insight into this disturbing development: “The evolution of Generative AI has led to concerns over criminal use, exemplified by WormGPT and FraudGPT. Born from GPT-J, WormGPT creates malware without limitations, while FraudGPT crafts undetectable malware and malicious content.”

“The emergence of these AI-powered tools poses a significant threat to enterprises,” Aswin Sreenivas warns. “Many companies have been slow to adopt Generative AI due to concerns about security infrastructure. Educating the workforce about these threats is crucial, as is developing robust cybersecurity measures to safeguard against data breaches and other cyber threats.”

FraudGPT’s capabilities are alarming. They include generating malicious code to exploit system vulnerabilities, creating undetectable malware, identifying Non-Verified by Visa (Non-VBV) bins for unauthorized transactions, crafting convincing phishing pages, locating hidden hacker groups and black markets, generating scam content, finding data leaks, and aiding in learning coding and hacking techniques.

To quote Aswin Sreenivas again, “Additionally, it assists in identifying cardable sites for fraudulent credit card transactions. WormGPT, another tool launched in July 2023, specializes in crafting convincing fake emails for business email compromise (BEC) attacks, bypassing spam filters.”

The rapid advancement of AI models has made it challenging for security experts to combat automated machine-generated outputs, providing cybercriminals with more efficient ways to defraud and target victims. While detection tools for AI-generated text exist, their effectiveness has been questioned, and the cybersecurity landscape remains challenging to navigate.
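One reason detection is hard is that the simple statistical signals such tools often rely on are easy to question. A commonly cited heuristic is "burstiness": human writing tends to vary sentence length more than machine-generated text. The sketch below is purely illustrative (the function name and splitting logic are assumptions, not taken from any named detection tool) and shows why such a crude signal is unreliable on its own:

```python
import statistics

def burstiness(text: str) -> float:
    """Variance of sentence lengths (in words), a crude proxy sometimes
    used to flag machine-generated text, which tends to be more uniform.
    Illustrative only: real detectors use model-based scores such as
    perplexity, and even those have been of questioned effectiveness."""
    # Naive sentence split on terminal punctuation; a hypothetical
    # simplification, not production-grade tokenization.
    normalized = text.replace("?", ".").replace("!", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pvariance(lengths)
```

A varied passage scores higher than a run of identically shaped sentences, but a cybercriminal can trivially prompt a model to vary its style, which is precisely why such heuristics alone leave the landscape difficult to navigate.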

As Aswin Sreenivas emphasizes, “Proactive measures encompassing technology, collaboration, education, and ethics are imperative to ensure responsible AI advancement and curb potential misuse.” The growing interest in AI within the underground community amplifies the need for vigilance. While current capabilities may not be groundbreaking, these models signify a concerning step towards AI weaponization.

AIM Research
AIM Research is the world's leading media and analyst firm dedicated to advancements and innovations in Artificial Intelligence. Reach out to us at info@aimresearch.co