UK, US slated to announce AI safety partnership – POLITICO

LONDON — London and Washington are to announce a “close collaboration” on AI safety as early as Wednesday, U.K. and U.S. officials confirmed to POLITICO.

The collaboration is expected to marry new guardrails the White House placed on artificial intelligence development in this week’s executive order (EO) with existing work by the United Kingdom’s “Frontier AI Taskforce.”

“We plan to announce close bilateral collaboration with the U.S. safety institute this week,” a U.K. official close to the planning of Britain’s AI safety summit told POLITICO. The person was granted anonymity to talk about the summit, which will take place at Bletchley Park on Nov. 1 and 2. 

Both countries will announce their own versions of the institute as the summit kicks off. In a speech Wednesday in London, U.S. Vice President Kamala Harris, who is representing the Biden administration at the summit, will announce the United States AI Safety Institute, which will be housed at the Department of Commerce, according to a U.S. official granted anonymity to discuss internal plans.

“It will work to create guidelines, standards and best practices for evaluating and mitigating the full spectrum of risks,” the U.S. official added. “We must address the full spectrum of risk, from potentially catastrophic risks to societal harms that are already happening such [as] bias, discrimination and the proliferation of misinformation.” 

Meanwhile, British Prime Minister Rishi Sunak has said he will set up an “AI Safety Institute” that will examine, evaluate and test new types of the emerging technology. Sunak said the new institute will build on the work of Britain’s existing Frontier AI Taskforce, which he said has already been granted “privileged access” to the technology models of leading AI companies like Google DeepMind, Anthropic and OpenAI.

The countries will “also participate in information sharing and research collaboration,” said the U.S. official, and will make their own separate announcements. The U.S. will also share information with similar safety institutes in other countries.

The White House executive order signed Monday will require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. It is designed to ensure AI systems are safe before companies make them public. Under the EO, Washington will set up an “AI Safety and Security Board.”

“We’re trying to lead with substance here and we’re trying to engage with other countries with substance and this is a vision, and the Vice President will lay it out in her speech, […] for how the United States is seeing AI policy and AI governance,” said White House special adviser on AI Ben Buchanan on a forthcoming episode of the POLITICO Tech podcast, when asked about the timing of the EO in the same week as the U.K. AI summit. Harris is giving a speech in London on Wednesday afternoon on the administration’s AI initiatives, including the EO.

The U.K.’s Tech Secretary Michelle Donelan told POLITICO on Tuesday that it was easier for the U.S. to lead the industry toward greater transparency because it is dominated by American firms, but that there are aspects of the work the U.K. can move faster on.

“I know America and other countries will have plans for institutes too, but we can do it a lot quicker, because we already have that initial organization in the [Frontier AI Taskforce],” she said. “We’ve already got that expertise set up, funding in there, and our processes allow us to do that at a quicker speed.”

“The future vision is to secure the safety of models before they are released,” Sunak said Thursday. Britain is expected to publish some information publicly, but will reserve more sensitive national security intel to a smaller group of like-minded governments.
