OpenAI has shut down several accounts associated with state-affiliated entities from China, Russia, Iran, and North Korea, citing concerns that the accounts were exploiting its AI chatbot services for what the company described as “malicious cyber activities.” OpenAI announced the move in a blog post on Wednesday.
The takedown stemmed from a joint investigation by OpenAI and Microsoft Threat Intelligence. In its own blog post, Microsoft clarified that while it found no evidence of significant cyberattacks carried out by these actors, the activity it observed suggested the groups were exploring how the new technology might be put to use.
The state-linked groups were found to be using OpenAI’s services for a range of purposes: researching companies and intelligence agencies, translating documents, generating content for hacking campaigns, and performing simple coding tasks. OpenAI’s products, such as ChatGPT and Whisper, were used for these tasks.
Microsoft specifically identified accounts tied to Forest Blizzard, a Russian state-affiliated threat actor that had used the services to research satellite communication protocols and radar technology and to assist with coding tasks. Forest Blizzard, also known as Fancy Bear or APT28, has a history of targeting entities worldwide, including involvement in the 2016 Democratic National Committee breach.
Similarly, two China-affiliated threat actors, Charcoal Typhoon and Salmon Typhoon, were identified using OpenAI’s services to debug code and translate technical papers. The groups, also known as Aquatic Panda and Maverick Panda respectively, were observed exploring how large language models like ChatGPT might aid their operations.
Another identified threat actor, Emerald Sleet, is associated with North Korea and also known as Kimsuky; it used OpenAI’s services to generate code and content for phishing attacks. Additionally, Iran-based Crimson Sandstorm was found using the services to generate phishing content and to research ways of evading malware detection.
OpenAI emphasized that the observed activities were consistent with its assessment that its AI models offer only limited assistance with malicious cybersecurity tasks beyond what can already be achieved with publicly available, non-AI-powered tools. All identified accounts have been disabled.
OpenAI closed by reiterating its commitment to innovation, collaboration, and information sharing to combat malicious activity across the digital ecosystem, while also improving the experience for legitimate users.