OpenAI, Google, Anthropic and other companies developing generative AI continue to improve their technologies and release increasingly capable large language models. To create a common approach to independently evaluating the safety of those models as they come out, the UK and US governments have signed a Memorandum of Understanding. Together, the UK’s AI Safety Institute and its counterpart in the US, which was announced by Vice President Kamala Harris but has yet to begin operations, will develop suites of tests to assess the risks and ensure the safety of “the most advanced AI models.”
They’re planning to share technical knowledge, information and even personnel as part of the partnership, and one of their initial goals appears to be a joint testing exercise on a publicly accessible model. The UK’s science minister Michelle Donelan, who signed the agreement, told The Financial Times that they’ve “really got to act quickly” because a new generation of AI models is expected to come out over the next year. They believe those models could be “complete game-changers,” and nobody yet knows what they could be capable of.
According to The Financial Times, this partnership is the first bilateral arrangement on AI safety in the world, though both the US and the UK intend to team up with other countries in the future. “AI is the defining technology of our generation. This partnership is going to accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society,” US Secretary of Commerce Gina Raimondo said. “Our partnership makes clear that we aren’t running away from these concerns — we’re running at them. Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance.”
While this particular partnership is focused on testing and evaluation, governments around the world are also conjuring regulations to keep AI tools in check. Back in March, the White House issued a policy aiming to ensure that federal agencies only use AI tools that “do not endanger the rights and safety of the American people.” A couple of weeks before that, the European Parliament approved sweeping legislation to regulate artificial intelligence. It will ban “AI that manipulates human behavior or exploits people’s vulnerabilities” and “biometric categorization systems based on sensitive characteristics,” as well as the “untargeted scraping” of faces from CCTV footage and the web to create facial recognition databases. In addition, under its rules, deepfakes and other AI-generated images, videos and audio will need to be clearly labeled as such.