
This is a make-or-break week for policing artificial intelligence.

On Monday, the White House published its long-awaited executive order on everything from funding new forms of the technology to clamping down on potential abuses. The same day, the G7 group of leading democratic economies announced a separate nonbinding code for reining in so-called generative AI — tools like ChatGPT that have captured the public’s imagination this year. 

Not to be outdone, the U.K. will hold a two-day summit on AI safety starting Wednesday, with a guest list including U.S. Vice President Kamala Harris and European Commission President Ursula von der Leyen. In the background, the European Union is also working on its own new laws expected to be finished by December. 

Yet this week’s carousel of AI-focused announcements and geopolitical set-piece events also underscores one key truth: Western governments are still scrambling to project themselves as leaders in a technological race that has both enraptured and alarmed their citizens.

Many EU, U.K. and U.S. officials who spoke with POLITICO were eager to play up how democracies were pulling in the same direction. But several of them still took petty potshots at their counterparts in other jurisdictions, either for not legislating quickly enough or for failing to promote innovation.

“We’re all singing from the same hymn sheet,” said one Western official involved in those global AI negotiations, who was granted anonymity to discuss sensitive matters. “But much of this is about retail politics, and politicians want to show voters they are in charge.”

At stake is who gets to control the future development of a technology that has raised hopes of dramatic progress in areas such as medical science, while also triggering fears that it could undermine democracy and even start nuclear war.

Over the last year, the rise of OpenAI’s ChatGPT and Google’s Bard generative AI services has provided an early look at what may be ahead. Those systems can produce lifelike text, video and images in ways that make people believe such automated services have a mind of their own.

Politicians and policymakers have been eager to show off their credentials for handling AI, often collaborating with like-minded governments, while also competing to portray themselves as global trend-setters. 

At the U.K.’s summit this week, leaders are expected to back new efforts to promote so-called “responsible AI,” according to the event’s final communiqué obtained by POLITICO. That term — mirroring a separate AI governance effort announced by the United Nations last week — refers to stopping the technology from causing societal harm. 

Many of the Western governance efforts spread across the G7 countries, the EU and North America share a similar history, dating back to nonbinding commitments produced in 2019 by the Organization for Economic Cooperation and Development, a group of mostly rich countries.

Those include pledges by companies and governments to boost transparency on how AI systems are created, impose greater checks on how these models access people’s data, and deepen international collaboration on standards and on how these complex services are rolled out globally.


The White House’s sprawling 110-page executive order is a clear example of Joe Biden’s administration speaking to a predominantly domestic audience while portraying itself as a world leader. Under the proposals, U.S. federal agencies will be given greater powers to oversee how AI is rolled out in their specific areas — like housing, consumer protection and national security.

Washington will also use its extensive purchasing power via government contracts to nudge AI companies to improve their transparency and data protection regimes, all while promoting the U.S. as a key innovator in a technology that will likely shape the next 20 years. One American official, speaking on the condition of anonymity, said the U.S. wanted to show the world it could foster world-leading companies and trend-setting oversight.

“There’s an element of moral authority here,” Nathaniel Fick, the U.S. Ambassador at Large for Cyberspace and Digital Policy, told POLITICO’s tech podcast. “It puts the marker down that the United States is going forward holding itself — this administration, the U.S. government, holding itself — to a standard.”

Euroskeptics 

But Washington’s push on AI governance was received with significant skepticism in Brussels, where EU officials were eager to promote their own version of AI lawmaking, the bloc’s upcoming Artificial Intelligence Act, to as many countries as possible.

Three European Commission officials told POLITICO they welcomed the Biden administration’s efforts, but questioned how much could be done without fundamentally updating American legislation along the lines of what is planned in the EU. One of those individuals said Brussels would continue apace with its own rulebook.

The EU proposals would outlaw certain uses of the technology, such as AI systems that assign social scores to people, a practice that China, which is also promoting its own global leadership on AI, has already rolled out.


“Everyone recognizes the potential risks,” Věra Jourová, the European Union’s digital chief, told POLITICO. 

Currently, American and European officials are working together on technical standards for AI as part of the EU-U.S. Trade and Technology Council, a biannual meeting between senior officials that will next be held in Washington in December. 

But two of the Commission officials played down the idea that Europe is falling behind the White House’s leadership. Unlike Brussels, Capitol Hill has yet to produce AI-focused legislation. EU politicians also remain optimistic that other countries will copy the bloc’s legislation, in a repeat of what has happened in other digital areas like data protection and social media laws.

“We like what we see from the executive order. But it’s not the same as what we’re doing here,” said one of those Commission officials. “We’re legislating — and they are not.” 

London is taking a different path, focusing almost exclusively on the potential existential risks of artificial intelligence falling into the wrong hands.

As part of the two-day summit this week, British officials have invited like-minded countries, as well as potential adversaries like China, to hash out what to do about safety concerns linked to bioweapons and online disinformation. They also want to create a new network of researchers, modeled on the U.N.’s Intergovernmental Panel on Climate Change, that will publish an annual state of AI report.

Rishi Sunak, the U.K.’s prime minister, also announced plans for a so-called “AI Safety Institute” that would oversee how the likes of OpenAI and Google’s DeepMind develop their latest AI models. One British official, who was granted anonymity to speak about ongoing deliberations, hoped that the U.S., which announced similar plans, would join forces with the British initiative, particularly on potential national security concerns.

“My vision, and our ultimate goal, should be to work towards a more international approach to safety,” said Sunak ahead of this week’s summit. “AI does not respect borders, so we cannot do this alone.”

Vincent Manancourt, Joe Bambridge, Steven Overly and Clothilde Goujard contributed reporting from London, Washington and Brussels.
