OpenAI faces a complaint in Europe because ChatGPT 'hallucinates' and lies about people

Oliver Thansan
Monday, 29 April 2024, 10:28

In the history of data privacy advocacy in Europe there is an entity called noyb (styled in lowercase, short for “none of your business”), founded by the Austrian Max Schrems, which has won several court victories against technology giants and has twice overturned agreements between the European Union and the United States on data transfers between companies in the two territories. This entity has now opened a new front against OpenAI because ChatGPT “hallucinates”, producing false data about people with no way to correct it.

The EU General Data Protection Regulation requires that information about individuals be accurate and that they have full access to the stored data, including knowing its sources. When ChatGPT offers false data about specific people, it provides no option to correct it and no information about its sources, so yesterday noyb filed a complaint against OpenAI with the DSB, the Austrian data protection authority, whose resolutions are valid throughout the rest of the EU.

Maartje de Graaf, a lawyer at noyb, noted in a statement that “making up false information is quite problematic in itself. But when it comes to false information about people, the consequences can be serious.” The lawyer pointed out that “if a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. Technology has to follow legal requirements, not the other way around.” Noyb cited an article from The New York Times (a newspaper that has also sued OpenAI for using its articles without permission to train AI) which reports that AI chatbots invent information between 3% and 27% of the time.

Noyb's attempts to have ChatGPT's misinformation corrected went nowhere: OpenAI argued that it was not possible to do so. The GDPR also guarantees users the right to demand from any company a copy of the personal data it holds about them, but OpenAI did not provide any of the information requested. Noyb's complaint asks the Austrian data protection authority to investigate OpenAI's data processing and the measures it takes to ensure the accuracy of the personal data processed by the large language models that power ChatGPT.