Vera Jourova, a vice-president of the European Commission, says companies integrating generative AI into their services must build in safeguards to counter disinformation.
Unveiled late last year, Microsoft-backed OpenAI’s ChatGPT has become the fastest-growing consumer application in history and set off a race among tech companies to bring generative AI products to market.
Concerns, however, are mounting about potential abuse of the technology and the possibility that bad actors and even governments may use it to produce far more disinformation than before.
“Signatories who integrate generative AI into their services like Bing Chat for Microsoft, Bard for Google should build in necessary safeguards that these services cannot be used by malicious actors to generate disinformation,” Jourova told a press conference on Monday.
“Signatories who have services with a potential to disseminate AI-generated disinformation should, in turn, put in place technology to recognise such content and clearly label this to users,” she said.
Companies such as Google, Microsoft and Meta Platforms that have signed up to the EU's Code of Practice on Disinformation should report next month on the AI safeguards they have put in place, Jourova said.
She warned Twitter, which quit the Code of Practice last week, to expect more regulatory scrutiny.
“By leaving the code, Twitter has attracted a lot of attention, and its actions and compliance with EU law will be scrutinised vigorously and urgently,” Jourova said.