
LONDON — As Elon Musk urged humanity to get a grip on artificial intelligence, ministers in London were hailing its benefits.

Rishi Sunak’s new technology chief Michelle Donelan on Wednesday unveiled the government’s long-awaited blueprint for regulating AI, insisting a heavy-handed approach is off the agenda.

At the heart of the innovation-friendly pitch is a plan to give existing regulators a year to issue “practical guidance” for the safe use of machine learning in their sectors based on broad principles like safety, transparency, fairness and accountability. But no new legislation or regulatory bodies are being planned for the burgeoning technology.

It stands in contrast to the strategy being pursued in Brussels, where lawmakers are pushing through a more detailed rulebook, backed by a new liability regime.

Donelan insists her “common-sense, outcomes-oriented approach” will allow the U.K. to “be the best place in the world to build, test and use AI technology.”

Her department’s Twitter account was flooded with content promoting the benefits of AI. “Think AI is scary? It doesn’t have to be!” one of its posts stated on Wednesday.  

But some experts fear U.K. policymakers, like their counterparts around the world, may not have grasped the scale of the challenge, and believe more urgency is needed in understanding and policing how the fast-developing tech is used.

“The government’s timeline of a year or more for implementation will leave risks unaddressed just as AI systems are being integrated at pace into our daily lives, from search engines to office suite software,” said Michael Birtwistle, associate director of data and AI law and policy at the Ada Lovelace Institute. The approach has “significant gaps,” he warned, which could leave harms “unaddressed.”

“We shouldn’t be risking inventing a nuclear blast before we’ve learnt how to keep it in the shell,” Connor Axiotes, a researcher at the free-market Adam Smith Institute think tank, warned.

Elon wades in

Hours before the U.K. white paper went live, an open letter was published across the Atlantic calling on labs to immediately pause the training of ever more powerful AI systems for at least six months. It was signed by artificial intelligence experts and industry executives, including Tesla and Twitter boss Elon Musk. Researchers at Alphabet-owned DeepMind and renowned Canadian computer scientist Yoshua Bengio were also signatories.

The letter called for AI developers to work with policymakers to “dramatically accelerate development of robust AI governance systems,” which should “at a minimum include: new and capable regulatory authorities dedicated to AI.” 

AI labs are locked in “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter warned.

Back in the U.K., Ellen Judson, head of the Centre for the Analysis of Social Media at the think tank Demos, warned that the U.K. approach of “setting out principles alone” was “not enough.”

“Without the teeth of legal obligations, this is an approach which will result in a patchwork of regulatory guidance that will do little to fundamentally shift the incentives that lead to risky and unethical uses of AI,” she said.

But Technology Minister Paul Scully told the BBC he was “not sure” about pausing further AI developments. He said the government’s proposals should “dispel any of those concerns from Elon Musk and those other figures.”

“What we’re trying to do is to have a situation where we can think as government and think as a sector through the risks but also the benefits of AI — and make sure we can have a framework around this to protect us from the harms,” he said.

Long time coming

Concerns about the U.K.’s ability to make policy in this area are countered by some of those who have worked closely with the British government on AI.

Its approach to policymaking has been “very consultative,” according to Sue Daley, a director at the industry body TechUK, who has been closely following AI developments for a number of years.

In 2018 ministers set up the Centre for Data Ethics and Innovation and the Office for AI, both of which worked across the government’s digital and business departments until they moved to the newly created Department for Science, Innovation and Technology earlier this year.

The Office for AI is staffed by a “good team of people,” Daley said, while also pointing to the work the U.K.’s well-regarded regulators, like the Information Commissioner’s Office, had been doing on artificial intelligence “for some time.”

Greg Clark, the Conservative chairman of parliament’s science and technology committee, said he thought the government was right to “think carefully.” The former business secretary stressed that this was his own view rather than that of the committee.

“There’s a danger in rushing to adopt extensive regulations precipitously that have not been properly thought through and stress-tested, and that could prove to be an encumbrance to us and could impede the positive applications of AI,” he added. But he said the government should “proceed quickly” from white paper to regulatory framework “during the months ahead.”

Public view

Outside Westminster, the public has yet to fully grasp the potential implications of the technology, surveys suggest.

Public First, a Westminster-based consultancy, which conducted a raft of polling into public attitudes to artificial intelligence earlier this month, found that beyond fears about unemployment, people were pretty positive about AI.

“It certainly pales into insignificance compared to the other things that they are worried about like the prospect of armed conflict, or even the impact of climate change,” said James Frayne, a founding partner of Public First who conducted the polling. “This falls way down the priority list,” he said.

But he cautioned this could change. 

“One assumes that at some point there will be an event which shocks them, and shakes them, and makes them think very differently about AI,” he added. 

“At that point there will be great demands for the government to make sure that they’re all over this in terms of regulation. They will expect the government to not only move very quickly, but to have made significant progress already,” he said.