
ChatGPT’s ‘Hallucinations’ Spark EU Privacy Complaint
The privacy advocacy group NOYB has lodged a complaint with the Austrian data protection authority against OpenAI, alleging that ChatGPT's provision of incorrect personal information may breach EU privacy rules, Reuters reports.
The complainant, represented by NOYB, asked ChatGPT for their date of birth. Instead of indicating that it lacked the necessary data, the chatbot repeatedly supplied false information.
When the complainant asked OpenAI to correct or delete the inaccurate data, the company allegedly refused, stating that it was unable to make corrections, and declined to disclose information about its data processing, sources, or recipients.
NOYB is calling for an investigation into OpenAI's potential violations of the GDPR and has urged regulators to impose measures ensuring the accuracy of personal data processed by large language models.
NOYB lawyer Maartje de Graaf emphasized that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing personal data.
“If the system cannot provide accurate and transparent results, it should not be used to obtain data about private individuals. Technology must adhere to legal requirements, not the other way around,” she said.
Previously, OpenAI acknowledged ChatGPT’s tendency to produce plausible but incorrect answers, a phenomenon known as “hallucinations” in language models. The company considers the issue challenging to resolve.
In September 2023, the Polish Data Protection Office initiated an investigation into ChatGPT following a complaint.