OpenAI and Anthropic Partner with US on AI Safety and Testing

The U.S. Artificial Intelligence Safety Institute at the National Institute of Standards and Technology (NIST) has announced agreements with AI companies OpenAI and Anthropic to collaborate on AI safety research, testing, and evaluation. The agreements grant the institute early access to major new AI models from both companies, both before and after their public release. The partnerships aim to support collaborative research on evaluating the capabilities and safety risks of these models and on developing methods to mitigate potential risks.

The agreements are part of a broader effort to address concerns over the safe and ethical use of AI technologies, highlighted by recent regulatory scrutiny in California. The U.S. AI Safety Institute will also collaborate with the U.K. AI Safety Institute to provide feedback on safety improvements. Elizabeth Kelly, director of the U.S. AI Safety Institute, described the agreements as a significant milestone in the responsible development of AI. Both OpenAI and Anthropic have expressed support for the partnership, underscoring the importance of ensuring AI technologies are safe and trustworthy.
