According to a recent report by Check Point Research, hackers have already begun using the AI-powered chatbot ChatGPT to create low-level cyber tools, including malware and encryption scripts. Security experts had warned that OpenAI's ChatGPT could be used to speed up cyberattacks, and those warnings appear to have proven accurate. The report details three instances in which hackers discussed using ChatGPT to write malware, build data encryption tools, and create new dark web marketplaces.
Hackers are always looking for ways to save time and accelerate their attacks, and ChatGPT's AI-generated responses tend to give them a good starting point for writing malware and phishing emails. According to the report, the hackers have so far created only basic data-stealing and encryption tools. One forum member noted that OpenAI's tool gave him a "nice [helping] hand to finish the script with a nice scope." Another "tech-oriented" hacker was spotted teaching "less technically capable cybercriminals how to utilize ChatGPT for malicious purposes."
The report also notes that one of the data encryption tools could easily be turned into ransomware once a few minor flaws were fixed. While it is too early to say how heavily cybercriminals will rely on ChatGPT in the long run, or how long they will be able to abuse the platform, it is clear that the technology has already been put to malicious use.
OpenAI has stated that ChatGPT is a research preview and that it is constantly looking for ways to improve the product and prevent abuse. Companies and individuals should be aware of the potential misuse of AI-powered tools like ChatGPT and take steps to protect themselves from cyberattacks.