OpenAI Blocks State-Linked Hackers from Accessing ChatGPT

OpenAI has blocked the use of its products by cybercriminal groups linked to various governments.

In collaboration with Microsoft Threat Intelligence, the company identified and removed accounts belonging to five state-affiliated threat actors: Charcoal Typhoon, Salmon Typhoon, Crimson Sandstorm, Emerald Sleet, and Forest Blizzard.

The investigation revealed that Charcoal Typhoon used AI to explore various cybersecurity tools, debug code, and create content for phishing campaigns.

Salmon Typhoon translated technical documents, studied the activities of intelligence agencies and regional threat actors, and investigated ways to conceal malicious processes in systems.

Crimson Sandstorm used ChatGPT to create phishing applications and websites.

Emerald Sleet, besides writing malicious code, sought experts and organizations involved in defense issues in the Asia-Pacific region.

Forest Blizzard requested information on satellite communication protocols and radar imaging technology.

An internal investigation by OpenAI found that the GPT-4 version of the chatbot offers only limited capabilities for carrying out malicious tasks. Nevertheless, the company intends to strengthen the security of its products by studying how they are used in real-world cybercriminal activity and by tightening monitoring of user interactions with the platform.

Earlier, ForkLog reported that OpenAI introduced a “memory” feature for ChatGPT.
