
In the race to rein in artificial intelligence, Western governments have hit a major bump in the road: they all want to win. 

Officials from the European Union, the United States and other major economies are competing to write the definitive rules for artificial intelligence, including for the likes of OpenAI’s ChatGPT and Google’s Bard. 

Rival summits will be held in the fall with the aim of reaching a coordinated plan among Western governments on how to regulate the emerging technology. But these upcoming events risk entrenching divisions between countries in ways that threaten to undermine efforts to draw up a unified international rulebook on AI. To make matters worse, some of the talks are now getting personal.

“Everyone is committed to making this work,” said a European Commission official involved in negotiations over AI rules. “But right now, there are a lot of egos in the room.”

Western politicians are keen to show voters they are on top of a technology that burst into the public’s consciousness almost overnight.

AI advocates say the economic opportunities offered by rolling out the technology range from quicker diagnoses of diseases to the development of autonomous vehicles. Skeptics warn AI could lead to a surge in unemployment and — in the very worst scenarios — global Armageddon if automated systems gain uncontrollable power.

Experts argue a common Western rulebook is vital to allow companies that use the technology to operate with ease internationally because AI is inherently a cross-border tool. Common rules would also protect people from Berlin to Boston from the technology’s potential harms, including minority groups potentially suffering discrimination from automated AI tools.

“We really don’t have a systematic global response to what we should do about the many risks,” said Gary Marcus, a psychologist and cognitive scientist at New York University who wants to see greater checks on AI. “Every country is trying to do something on its own.”

While governments in the West argue among themselves, China is pressing ahead with its own rulebook. The Chinese Communist Party says it’s seeking to protect its citizens from AI’s risks. But Beijing’s critics say its regulation will be designed to serve its authoritarian ends.

Governments in the West worry that China’s totalitarian take on AI, including the technology’s wholesale use for national security purposes, may gain ground across the developing world if they don’t promote their own blueprint as an alternative.

For this article, POLITICO spoke to six Western officials working on the AI summits, who were granted anonymity to discuss the challenges they face.

In September, officials from the G7 group of Western industrialized economies are expected to meet to finalize a blueprint for how to regulate AI, according to two officials with direct knowledge of the talks. 

That gathering will then be followed by a more formal summit of G7 leaders, likely in October or November, the officials said. European and U.S. officials hope the G7 work will bolster their joint attempt to limit the risks of generative AI and develop safe ways to use the technology to jumpstart economic growth.

The U.K. has also pitched itself as a world leader on AI safety and is expected to host its own summit, in London in November. British Prime Minister Rishi Sunak views the event as a chance to enhance the U.K.’s role as a global player seven years after the Brexit referendum.

Officials involved in these overlapping AI projects describe a complex diplomatic tussle. International rivalries, diplomatic realpolitik and — above all — fears about how China will promote its own AI rules have complicated preparations for the meetings. Not all Western capitals, particularly within the EU, view Beijing’s stance on AI as contradictory to their own.

Divisions on how best to police the technology have also slowed down the process of reaching agreement. The EU wants to take a more aggressive stance on policing AI, while the U.S., U.K. and Japan would prefer more industry-led commitments. It’s unclear whether these differences can be overcome before the proposed summits later this year.

Egos, not policy

Three Western officials, who spoke on the condition of anonymity to discuss internal deliberations, complained that people’s egos — and not efforts to regulate AI — had taken over discussions linked to the G7 and U.K. summit events. 

Since the EU first proposed bringing AI oversight into the G7’s work in late April and followed that up with a two-page memo to the U.S. in late May, representatives from cooperating governments have been sparring privately over who gets credit for the West’s plans, the officials added.

That behavior has included adding to the draft G7 document in ways that favor their own stance on AI governance; taking public credit for the conclusions of the upcoming G7 summit; and dismissing others’ views in often backhanded comments while drafting proposals.

Brussels wants its own AI legislation, which is expected to be completed by December, to form the basis of measures adopted by other leading democracies, according to two European Commission officials involved in that process. That plan involves pushing for mandatory curbs on how AI is deployed in so-called “high-risk” cases like the use of facial recognition technology in law enforcement. 

Washington is eager to press its more industry-friendly approach, and the White House published a set of voluntary commitments that Amazon and Microsoft have agreed to support. These non-binding pledges, which include promises to let outsiders test the firms’ AI systems for bias alongside other societal safeguards, are, in part, an effort to get ahead of similar proposals at the heart of the G7’s upcoming summit, according to one U.S. official.

“Any kind of international level agreement will have to be at the level of very vague principles,” said Suresh Venkatasubramanian, a computer scientist at Brown University, who co-wrote the White House’s guidelines for how U.S. agencies should oversee AI. “Everyone wants to do their own thing.”