
Cybercriminals are increasingly using the darknet to sell AI-powered tools. These tools can be used for a variety of malicious purposes, from generating phishing emails that are more likely to fool victims to creating malware that is more difficult to detect and remove. They can even be used to develop more persuasive social engineering attacks and to automate hacking tasks such as password cracking and vulnerability scanning.
Advanced generative AI chatbots known as FraudGPT and WormGPT, marketed as "bots without limitations, rules, and boundaries," have recently emerged on the dark web and are now available for sale to anyone seeking to create phishing emails, malware, or cracking tools. Cybersecurity experts have cautioned people against these chatbots.
Built on the same large language model technology that powers popular chatbots such as ChatGPT, these tools can generate realistic and coherent text from user prompts. Hackers employ them to create deceptive emails, fooling unsuspecting victims into believing they have received official business communications or bank notices.
FraudGPT can write malicious code, create viruses and malware designed to evade detection, find non-VBV BINs (card numbers not protected by Verified by Visa checks), and generate phishing pages and hacking tools to infiltrate groups, sites, and markets. It can also craft scam pages or letters, discover leaks and vulnerabilities, and even locate active card data.
Much has been written about the potential for threat actors to use language models. With open source large language models (LLMs) such as LLaMA and Orca, and now the cybercrime model WormGPT, the trends around the commodification of cybercrime and the increasing capabilities of models are set to collide.
Recently, Gujarat CID officials uncovered that FraudGPT vendors are well known in underground dark-web marketplaces such as Empire, WHM, Torrez, Alphabay, and Versus. "Access to the FraudGPT chatbot is sold over a Telegram channel," a CID official added. Sources indicate that these chatbots are available for monthly subscription fees ranging from $200 to $1,700.
These chatbots allow threat actors to identify zero-days, write spear-phishing emails, and carry out other types of cybercrime without the need for jailbreaks. This is undoubtedly concerning, and it only begins to scratch the surface of threat actors' interest in ChatGPT, GPT-4, and AI language models more broadly.