OpenAI has quietly removed a clause from its usage policy that prohibited the use of its chatbot, ChatGPT, for “military purposes,” according to The Intercept.
Until January 10, the Usage Policy banned activity “that has high risk of physical harm,” explicitly listing “weapons development” and “military and warfare.”
The updated provisions retain a prohibition on using OpenAI’s services to “harm yourself or others,” citing “weapons development” as an example, but the blanket ban on “military” applications has been dropped.
According to OpenAI, the changes aim to make the document “more understandable and readable.”
“Principles like ‘do not harm others’ are broad yet easily grasped and relevant in many contexts. Additionally, we specifically cited weapons and harming others as clear examples,” a company representative stated.
However, the company did not explain why the military-related wording was removed.
Sarah Myers West, Managing Director of the AI Now Institute, noted that artificial intelligence has been used in Israel’s strikes on the Gaza Strip and speculated that this may be why OpenAI removed the ban on military applications of its technologies.
In February 2023, the U.S. government issued a political declaration on the responsible military use of artificial intelligence, which called for, among other things, “human accountability.”
Earlier, UN Secretary-General António Guterres called for measures against “killer robots.”
