OpenAI has proposed measures to address the data privacy concerns that led to the temporary ban on ChatGPT in Italy, the Associated Press reports.
During a video call between officials from the Italian supervisory authority and OpenAI's leadership, the company pledged to propose remedies to resolve the situation. Neither side disclosed the specific privacy safeguards under discussion.
The Italian regulator said it does not wish to hinder AI development, but stressed that OpenAI must comply with EU data-privacy rules.
At the end of March, the Italian data protection authority ordered the chatbot blocked within the country. The temporary ban was imposed following a leak of payment details belonging to subscribers of ChatGPT's premium tier.
The regulator also raised concerns about the legality of collecting Italians' data to train the neural network and about the model's ability to generate false information.
Regulators in Europe and elsewhere have not ruled out similar measures against OpenAI. Ireland's Data Protection Commission said it was "in contact with the Italian regulator to understand the basis for their actions" and pledged to coordinate with its counterparts.
France’s data privacy regulator said it had opened an investigation after receiving two complaints about ChatGPT.
UK authorities warned that developers of generative AI "must consider their data-protection obligations from the outset" and build appropriate safeguards into their systems.
"This is not optional: if you are processing personal information, that is the law," the UK Information Commissioner's Office said.
Canada's privacy regulator also opened an investigation into OpenAI after receiving a complaint alleging the "collection, use and disclosure of personal data without consent".
In March, Europol warned of potential misuse of ChatGPT in phishing, disinformation and cybercrime.
In the same month, a technology-ethics group urged the US Federal Trade Commission to conduct an investigation into OpenAI.
