
Saturday, 13 January 2024

OpenAI's policy no longer explicitly bans the use of its technology for 'military and warfare'

Just a few days ago, OpenAI's usage policies page explicitly stated that the company prohibited the use of its technology for "military and warfare" purposes. That line has since been deleted. As first noticed by The Intercept, the company updated the page on January 10 "to be clearer and provide more service-specific guidance," according to the changelog. The policy still prohibits the use of the company's large language models (LLMs) for anything that could cause harm, and it warns people against using its services to "develop or use weapons." However, the language pertaining to "military and warfare" has been removed.

While we've yet to see its real-world implications, this change in wording comes just as military agencies around the world are showing an interest in using AI. "Given the use of AI systems in the targeting of civilians in Gaza, it's a notable moment to make the decision to remove the words 'military and warfare' from OpenAI's permissible use policy," Sarah Myers West, a managing director of the AI Now Institute, told the publication.

The explicit mention of "military and warfare" in the list of prohibited uses suggested that OpenAI couldn't work with government agencies like the Department of Defense, which typically offers lucrative deals to contractors. At the moment, the company doesn't have a product that could directly kill or cause physical harm to anybody. But as The Intercept noted, its technology could be used for tasks like writing code and processing procurement orders for things that could be used to kill people.

When asked about the change in its policy wording, OpenAI spokesperson Niko Felix told the publication that the company "aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs." Felix explained that "a principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts," adding that OpenAI "specifically cited weapons and injury to others as clear examples." However, the spokesperson reportedly declined to clarify whether prohibiting the use of its technology to "harm" others included all types of military use outside of weapons development. 

This article originally appeared on Engadget at https://ift.tt/zP6KYOv

