Italy is the first Western country to take such action against the popular artificial intelligence chatbot.
The ban was announced on Friday by the country's privacy regulator.
The Italian Data Protection Authority described its action as provisional “until ChatGPT respects privacy”. The measure temporarily bars the company from holding Italian users’ data.
The watchdog said ChatGPT developer OpenAI had no legal basis to justify “the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform”.
It also cited a data breach on March 20 in which user conversations and payment information were exposed, a problem the United States firm blamed on a bug.
Since ChatGPT was launched, it has seen meteoric growth. Millions of people are using the software for activities ranging from developing architectural designs to writing essays and drafting messages, songs, novels and jokes.
It has also sparked an AI race among other tech firms and venture capitalists. Google is rushing out its own chatbot, called Bard, and investors are pouring cash into all manner of AI projects.
But critics have long fretted over where ChatGPT and its competitors get their data or how they process it.
“We actually don’t know how the data is used because there’s not enough information given to the public,” Ruta Liepina, an AI fellow at the University of Bologna in Italy, told Al Jazeera.
“At the same time, [in] the European Union, there are a lot of new regulations being proposed, but it will be a matter of how they are enforced and how much the companies collaborate in showing information that is needed to better understand how these technologies are working,” Liepina said.
The AI systems that power such chatbots, known as large language models, are able to mimic human writing styles based on the huge trove of digital books and online writing they have ingested.
Some public schools and universities around the world have blocked the ChatGPT website from their local networks over student plagiarism concerns, but it was not clear how Italy would block it at a nationwide level.
The move is unlikely to affect applications from companies that already have licences with OpenAI to use the same technology driving the chatbot, such as Microsoft’s Bing search engine.
This week, hundreds of experts and industry figures signed an open letter calling for a pause in the development of powerful AI systems, arguing they posed “profound risks to society and humanity”.
The letter was prompted by OpenAI’s release this month of GPT-4, a more powerful version of its chatbot, with even less transparency about its data sources.
The Italian watchdog ordered OpenAI to report within 20 days what measures it has taken to ensure the privacy of users’ data or face a fine of up to 20 million euros ($22m) or 4 percent of its annual global revenue, whichever is greater.
AI experts said it is likely that more governments will follow suit and issue similar regulations.
“I think there might be some follow-up from other countries, [especially] if the OpenAI company doesn’t provide more information of how the algorithm is trained,” Liepina said.
The San Francisco-based company’s CEO, Sam Altman, announced this week that he would embark on a six-continent trip in May to talk about the technology with users and developers.
His trip is to include a stop in Brussels, where European Union lawmakers have been negotiating sweeping new rules to limit high-risk AI tools.
Altman said his European itinerary would also include stops in Madrid, Munich, London and Paris.