OpenAI and Anthropic agree to share their models with the US AI Safety Institute


OpenAI and Anthropic have agreed to share their AI models – before and after release – with the US AI Safety Institute. The agency, established through a 2023 executive order by President Biden, will offer safety feedback to the companies to improve their models. OpenAI CEO Sam Altman hinted at the agreement earlier this month.

“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, director of the US AI Safety Institute. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

The US AI Safety Institute is part of the National Institute of Standards and Technology (NIST). It creates and publishes guidelines, benchmark tests, and best practices for testing and evaluating potentially dangerous AI systems. “As much as AI has the potential to do great good, it also has the potential to cause great harm, from AI-powered cyberattacks to AI-engineered bioweapons that could endanger the lives of millions,” Vice President Kamala Harris said in late 2023 after the agency was established.

The first-of-their-kind agreements are formalized through a (formal but nonbinding) Memorandum of Understanding. The agency will receive access to each company’s “major new models” before and after their public release. The agency describes the agreements as collaborative, risk-mitigating research that will evaluate capabilities and safety. The US AI Safety Institute will also collaborate with the UK AI Safety Institute.

The US AI Safety Institute did not mention other companies working on AI. Engadget emailed Google, which began rolling out updated chatbot and image generator models this week, for comment on its exclusion. We’ll update this story if we hear back.

The agreements come as federal and state regulators seek to establish AI safeguards while the fast-developing technology is still nascent. On Wednesday, the California state assembly passed an AI safety bill (SB 1047) that would require safety testing for AI models that cost more than $100 million to develop or require a certain amount of computing power. The bill would require AI companies to build kill switches that can shut down models if they become “unwieldy or uncontrollable.”

Unlike the nonbinding agreement with the federal government, California’s bill would have some enforcement teeth. It gives the state’s attorney general license to sue if AI developers fail to comply, especially during threat-level events. However, the bill still requires one more process vote, as well as the signature of Gov. Gavin Newsom, who will have until Sept. 30 to decide whether to give it the green light.


