
OpenAI finds itself back in the regulatory crosshairs as European privacy regulators target ChatGPT’s habit of making up personal details. Digital rights organisation Noyb has submitted a new complaint in Norway after ChatGPT falsely stated that a man, Arve Hjalmar Holmen, had been convicted of killing two of his children. The incident highlights growing concerns about AI-generated misinformation and compliance with the EU’s stringent General Data Protection Regulation (GDPR).
Although OpenAI displays a disclaimer that ChatGPT can make mistakes, Noyb contends that this notice does not free the company of its legal obligation under GDPR, which requires that personal data be accurate. Noyb also argues that OpenAI offers no mechanism for individuals to correct inaccurate data generated by the AI.
This is not a one-off. Other public figures have reportedly been falsely implicated in scandals by the chatbot, raising doubts about its reliability and the ethics of its deployment. GDPR breaches have already cost OpenAI financially, with Italy issuing a €15 million fine for unlawful data processing. Regulatory enforcement elsewhere, however, has been slow: an earlier Noyb-supported complaint filed in Austria was transferred to Ireland’s Data Protection Commission (DPC), where it remains unresolved.
Noyb says OpenAI cannot simply 'hide' inaccurate data while continuing to process it internally. The complaint also challenges OpenAI's assertion that only its Irish entity is responsible for GDPR compliance, and demands action from the Norwegian data protection authority.
The case puts fresh scrutiny on how OpenAI handles personal data as AI-generated content proliferates across the web. It also feeds a growing debate over AI accountability, as regulators struggle to keep pace with rapidly evolving technology.