
LULEÅ, Sweden — Top European and American officials gathered in Sweden for tech and trade talks on Wednesday and tried to hammer out an answer to one of the toughest problems facing the world: how to police artificial intelligence.

Over an hour-long lunch of cod loin and chocolate praline, officials from Washington and Brussels worked on a voluntary “code of conduct” designed to prevent harm, including from the most advanced artificial intelligence technology known as generative AI — like OpenAI’s ChatGPT and Google’s Bard. In recent months, the technology has taken the public by storm, triggering hopes as well as anxieties for the future of humanity.

While some have been thrilled by AI’s potential to generate computer code and solve medical problems, others fear it will put millions of people out of work and could even threaten security.

“Democracy needs to show we are as fast as the technology,” Margrethe Vestager, Europe’s digital commissioner, told reporters as she entered the EU-U.S. Trade and Tech Council (TTC) summit in the small industrial city of Luleå, 150 kilometers south of the Arctic Circle. 

The TTC has grown into a twice-yearly gathering where senior transatlantic leaders like U.S. Secretary of State Antony Blinken and the European Union’s trade chief Valdis Dombrovskis hash out common approaches on everything from semiconductors to green tech investments. This week’s fourth edition is dominated by how to push back against China, where the two sides still struggle to agree.

But when it comes to the rise of AI, the U.S. and EU are increasingly anxious to make strides together. 

“It’s coming at a pace like no other technology,” said Gina Raimondo, the U.S. commerce secretary, referring to generative AI. “It will take a little bit of time for the U.S. Congress or (a country’s) parliament or our other regulatory agencies to catch up.”

But the joint plan is still in rough draft, at best. Vestager told POLITICO that the voluntary code of conduct was currently a two-page briefing note produced by the European Commission that she had personally handed over to Raimondo Wednesday.

The goal, according to the Danish politician, is to come up with non-binding standards around transparency, risk audits and other technical details for companies developing the technology. That would then be presented to leaders of the G7 as a joint transatlantic proposal in the fall.

With mandatory AI rules years away, a voluntary code is, at best, a stopgap until binding legislation is in place.


“We agree we will work on this, bring colleagues on board, in order to fit into the G7 process,” Vestager told reporters.

If that effort fails, it could potentially leave an opening for China to promote its own authoritarian version of the technology worldwide.

Where Europe and the U.S. diverge

Yet there remains a massive AI-shaped divide between Washington and Brussels on the rules.

The EU — bolstered by a track record of writing much of the digital rulebook that now dominates the Western world — is moving ahead with mandatory rules for artificial intelligence that would bar firms from using the technology in pre-defined “harmful” ways. By the end of December, European officials hope to complete the EU’s AI Act, after tough political negotiations that have dragged on for over two years.

But European countries and members of the European Parliament, both of which need to agree on a final text, are at loggerheads on some key aspects — notably, on facial recognition in public venues. The tech industry, meanwhile, has balked at what it considers overly onerous oversight of generative AI.

The effort in Brussels has led the U.S. industry, which is investing billions of dollars into AI, to keep its eyes on the EU for concrete legislation — much like what happened when the bloc started making laws on privacy and online content.

The U.S., on the other hand, prefers a more hands-off approach, relying on industry to come up with its own safeguards. Ongoing political divisions within Congress make it unlikely any AI-specific legislation will be passed before next year’s U.S. election.

The Biden administration has made international collaboration on AI a policy priority, especially because a majority of the leading AI companies, like Google, Microsoft and OpenAI, are headquartered in the U.S. For Washington, helping these companies compete against rivals from China is also a national security priority.

In recent weeks, the White House has thrown its doors open to industry, hosting the CEOs of four major AI companies earlier in May for a private discussion. It has launched efforts to get tech companies to commit to voluntary rules on responsible behavior. And when it comes to international standard-setting, it has been pushing the risk-management framework developed in the U.S. by the National Institute of Standards and Technology.

Building the West’s approach

On Wednesday, senior U.S. and EU officials tried to circumvent those fault lines with an approach that would build on existing global principles proposed by the Organization for Economic Cooperation and Development. They aimed to go further than the OECD by specifically calling out the potential pitfalls of generative AI.

An outline agreement would offer companies more certainty on how this emerging technology will be policed by the West’s two biggest economic blocs. The goal is to fast-track a voluntary code, though it will likely build on existing European rules for AI, and it’s unclear whether U.S. officials and companies will back such an approach.

“Regulatory clarity will be a good thing,” Sam Altman, chief executive of OpenAI, the tech firm behind ChatGPT, said during an event in Paris last week, while on a European tour that also included Warsaw, Madrid, Munich and London. The tech boss also met virtually with Vestager on Wednesday to discuss the proposed voluntary code of conduct.

Still, there are questions over whether the EU is speaking with one voice.

Some officials in Brussels are hoping they can frontload some of the bloc’s rules in a so-called AI Pact, a separate voluntary pledge tied to Europe’s upcoming AI Act that companies can sign up to before the law likely comes into force in 2026.

Thierry Breton, the EU’s internal market commissioner, said that any regulatory coordination with like-minded partners such as the U.S. would be based on Europe’s existing approach. “If others want to get inspired, of course, they are welcome,” he said.
