OpenAI’s policy no longer explicitly bans the use of its technology for ‘military and warfare’


Until a few days ago, OpenAI's usage policy page stated that the company prohibits the use of its technology for "military and warfare" purposes. That line has since been deleted. As first noticed by The Intercept, the company updated the page on January 10 "to be clearer and to provide more service-specific guidance," according to the change note. The policy still prohibits the use of its large language models (LLMs) for anything that could cause harm, and it warns people against using its services to develop or use weapons. However, the company removed the language specifically referring to "military and warfare."

While we have yet to see its real-world effects, the change in wording comes at a time when military agencies around the world are showing interest in using artificial intelligence. "Given the use of AI systems in the targeting of civilians in Gaza, the decision to remove the words 'military and warfare' from OpenAI's permissible use policy is a notable moment," said Sarah Myers West, managing director of AI Now.

The explicit mention of "military and warfare" in the list of prohibited uses suggested that OpenAI could not work with government agencies such as the Department of Defense, which typically offers lucrative deals to contractors. At the moment, the company does not have a product that could directly kill or physically harm anyone. But as The Intercept noted, its technology could be used for tasks such as writing code and processing purchase orders for things that could be used to kill people.

When asked about the change in its policy, OpenAI spokesperson Nico Felix told the publication that the company "aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now used globally by everyday users who can now also build GPTs." "A principle like 'do no harm to others' is broad but easily grasped and relevant in multiple contexts," Felix explained, adding that OpenAI specifically cited weapons and injury to others as clear examples. However, the spokesperson reportedly declined to clarify whether the prohibition on using its technology to "harm" others covers all types of military use beyond weapons development.


