The U.S. Artificial Intelligence Safety Institute (AISI), housed within the National Institute of Standards and Technology (NIST), announced today it has signed first-of-their-kind research and testing agreements with Anthropic and OpenAI – a key step in allowing the government to access and assess the companies’ latest AI models.

NIST had previously hinted that testing agreements with leading AI companies were in the works. Still, today's announcement marks the first formal agreements between the government and industry to advance AI safety.

“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” Elizabeth Kelly, the director of the U.S. AISI, said in a press release. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

Each company’s Memorandum of Understanding allows the AISI to access major new AI models before and after their public release. The agreements will enable collaborative research on AI capabilities and safety risks, as well as ways to mitigate those risks.

The AISI also plans to work with its partners at the U.K. AI Safety Institute to provide feedback to Anthropic and OpenAI on potential safety improvements they can make to their models.

The U.S. and U.K. AI Safety Institutes signed a partnership agreement in April, allowing them to immediately work together to test AI models.

For example, when Anthropic launched Claude 3.5 Sonnet in June, the company said it shared the model with the U.K. AISI for pre-deployment safety evaluation.

“The UK AISI completed tests of 3.5 Sonnet and shared their results with the US AI Safety Institute (US AISI) as part of a Memorandum of Understanding, made possible by the partnership between the US and UK AISIs,” Anthropic explained in a June blog post.

The U.S. AISI is a fairly new organization. NIST first announced the AISI in November 2023 at the direction of President Biden to support the responsibilities assigned to the Department of Commerce under the administration’s October 2023 AI executive order.

NIST said today’s agreements will help to advance AI safety by building on President Biden’s AI executive order and the voluntary commitments made to the Biden administration by leading AI model developers.

“Safe, trustworthy AI is crucial for the technology’s positive impact. Our collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment,” Jack Clark, co-founder and head of policy at Anthropic, said in a statement to MeriTalk.

“This strengthens our ability to identify and mitigate risks, advancing responsible AI development. We’re proud to contribute to this vital work, setting new benchmarks for safe and trustworthy AI,” Clark added.

OpenAI did not immediately respond to a request for comment.

Grace Dille
Grace Dille is MeriTalk's Assistant Managing Editor covering the intersection of government and technology.