Cybercriminals are using the AI-driven chatbot ChatGPT to develop malicious tools that can steal user data. The first instances of cybercriminals using ChatGPT to write malicious code have been spotted by Check Point researchers.
Just as ChatGPT can legitimately assist developers in writing code, it can also be used for malicious purposes. On underground hacking forums, threat actors are creating “infostealers” and encryption tools, and facilitating fraud activity.
Sergey Shykevich, Threat Intelligence Group Manager at Check Point, said, “Cybercriminals are finding ChatGPT attractive. In recent weeks, we’re seeing evidence of hackers starting to use it to write malicious code. ChatGPT has the potential to speed up the process for hackers by giving them a good starting point.”
Last month, a thread named “ChatGPT - Benefits of Malware” appeared on a popular underground hacking forum. The publisher of the thread disclosed that he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware.
Additionally, a threat actor posted a Python script, which he emphasized was the first script he had ever created. When another cybercriminal commented that the style of the code resembled OpenAI code, the hacker confirmed that OpenAI gave him a “nice (helping) hand to finish the script with a nice scope.” This suggests that would-be cybercriminals with little to no development skill could leverage ChatGPT to build malicious tools and become fully fledged cybercriminals with technical capabilities.