OpenAI, Google, Anthropic, and other companies developing generative artificial intelligence continue to improve their technology and release better and better large language models. To create a common approach for independent safety evaluation of those models as they come out, the UK and US governments have signed a Memorandum of Understanding. Together, the UK’s AI Security Institute and its US counterpart, which was announced by Vice President Kamala Harris but is not yet operational, will develop a suite of tests to assess risks and ensure the safety of “the most advanced AI models.”
As part of the partnership, they plan to share technical knowledge, data, and even personnel, and one of their initial goals appears to be conducting a joint testing exercise on a publicly available model. British Science Minister Michelle Donelan, who signed the agreement, told The Financial Times that they “need to move really fast” because they expect the next generation of AI models to come out within the next year. They believe those models could be “total game changers,” and they still don’t know what they’re capable of.
According to The Times, the partnership is the world’s first bilateral agreement on AI security, although both the US and the UK intend to collaborate with other countries in the future. “AI is the defining technology of our generation. This partnership will accelerate the work of both of our institutions across the spectrum of risks to our national security and our broader society,” said US Commerce Secretary Gina Raimondo. “Our partnership makes it clear that we’re not running away from these concerns—we’re running into them. Through our collaboration, our Institutes will better understand AI systems, conduct more robust assessments, and provide more rigorous instruction.”
While this particular partnership is focused on testing and evaluation, governments around the world are also developing regulations to keep AI tools in check. In March, the White House signed an order that aims to ensure federal agencies only use artificial intelligence tools that “do not endanger the rights or safety of the American people.” A few weeks before that, the European Parliament approved legislation governing artificial intelligence. It will ban “artificial intelligence that manipulates human behavior or exploits people’s vulnerabilities” and “biometric classification systems based on sensitive characteristics,” as well as the “indiscriminate scraping” of CCTV footage and the internet to create facial recognition databases. In addition, deepfakes and other AI-generated images, videos, and audio must be clearly labeled under its rules.