OpenAI hit with another privacy complaint over ChatGPT’s love of making stuff up


OpenAI faces a privacy complaint in Austria, filed by the advocacy group NOYB (“None Of Your Business”). The complaint alleges that the company’s ChatGPT bot repeatedly provided false information about a real person (who is not named in the complaint for privacy reasons). This may violate EU privacy regulations.

Instead of saying it didn’t know the answer to the query, the chatbot allegedly spat out an incorrect date of birth for the individual. Like politicians, AI chatbots like to make things up with confidence and hope we don’t notice. This phenomenon is called hallucination. However, it’s one thing for these bots to invent ingredients for a recipe, and quite another for them to invent details about real people.

OpenAI declined to help remove the false information, stating that such a change was technically not possible, though the company did offer to filter or block the data on certain requests. OpenAI’s privacy policy states that users who find an AI chatbot generating “factually inaccurate information” about them can submit a “correction request,” but the company says “it may not be possible to correct the inaccuracy in every case.”

This is bigger than just one complaint, as the chatbot’s propensity to make things up may run afoul of the region’s General Data Protection Regulation (GDPR). EU residents have rights regarding their personal data, including the right to have false data rectified. Failure to comply with these regulations can result in serious financial penalties, up to four percent of global annual turnover in some cases, and regulators can also order changes to how data is processed.

“It is clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals,” NOYB data protection lawyer Maartje de Graaf said in a statement. “If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. Technology has to follow the legal requirements, not the other way around.”

The complaint also raises transparency concerns, suggesting that OpenAI cannot say where the data it generates about individuals comes from, or whether that data is stored indefinitely. This is especially important where personal data is concerned.

Again, this is a complaint from an advocacy group, and EU regulators have yet to comment one way or the other. However, OpenAI has acknowledged in the past that ChatGPT “sometimes writes answers that sound convincing but are wrong or make no sense.” NOYB has asked the Austrian data protection authority to investigate the matter.

The company faces a similar complaint in Poland, where the local data protection authority opened an investigation after a researcher failed to get OpenAI’s help in correcting false personal data. That complaint accuses OpenAI of multiple violations of the EU’s GDPR relating to transparency, access rights and privacy.

Then there’s Italy. The Italian data protection authority investigated ChatGPT and OpenAI, concluding that it believes the company has violated the GDPR in various ways, including ChatGPT’s tendency to produce false information about people. The chatbot was banned in Italy until OpenAI made certain changes to the software, such as new warnings for users and the option to opt out of having chats used to train its algorithms. Though the ban has been lifted, the Italian investigation into ChatGPT continues.

OpenAI has not publicly addressed this latest complaint, but it did respond to the regulatory salvo issued by Italy’s DPA. “We want our AI to learn about the world, not individuals,” the company said. “We actively work to reduce personal information in the training of our systems, such as ChatGPT, which also rejects requests for personal or sensitive information about people.”


