Privacy group files complaint after ChatGPT invents “defamatory” child murder story

Privacy group Noyb filed a complaint today against OpenAI for “allowing” its model to invent a “defamatory” child murder story about a Norwegian user.

Noyb complained to the Norwegian data protection authority, accusing OpenAI of violating the EU’s General Data Protection Regulation (GDPR) by not allowing users to correct inaccurate personal information about themselves in ChatGPT’s large language model.

This is the second such complaint by Noyb. In the first case, the privacy group requested that OpenAI correct or erase inaccurate personal data about a public figure. The company argued that it was unable to do so and offered instead to block the data for certain prompts. ChatGPT also displays a disclaimer to its users specifying that some replies may be incorrect.

This time, a Norwegian user, Arve Hjalmar Holmen, complained that ChatGPT made up a story about him murdering his own children and being sentenced to years in prison. Parts of the fabricated story, however, were based on real personal data, including the number and gender of his children and the name of his home town.

Generative AI chatbots generate responses by predicting the most likely next word in reply to a prompt, and therefore risk producing false information or fabricated stories, a phenomenon known as “hallucination.”
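To illustrate the mechanism, here is a minimal toy sketch in Python. It is not OpenAI’s model or code; the vocabulary and probabilities are invented purely for illustration. It shows how chaining next-word predictions can yield a sentence that reads fluently yet may be factually wrong.

```python
import random

# Toy "language model": for each word, a made-up distribution over likely next words.
# This is an illustrative sketch only, not how ChatGPT is actually built.
toy_model = {
    "<start>": {"The": 1.0},
    "The":     {"Eiffel": 1.0},
    "Eiffel":  {"Tower": 1.0},
    "Tower":   {"is": 1.0},
    "is":      {"in": 1.0},
    "in":      {"Paris.": 0.6, "Berlin.": 0.4},  # a fluent but false ending is possible
}

def generate(model, max_tokens=10):
    """Repeatedly sample the next word, weighted by its (toy) probability."""
    word, output = "<start>", []
    for _ in range(max_tokens):
        choices = model.get(word)
        if not choices:
            break
        word = random.choices(list(choices), weights=list(choices.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate(toy_model))
# Possible output: "The Eiffel Tower is in Berlin."
# The sentence is grammatical and statistically plausible under the model, but untrue.
```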

The mixture of true and false information can make such stories particularly believable. Hallucinations can “have catastrophic consequences for people’s lives,” Noyb’s press release reads.

Because ChatGPT’s reply contains personal data, users can request that the information be corrected, Noyb argues. “The GDPR is clear. Personal data has to be accurate. And if it’s not, users have the right to have it changed to reflect the truth,” said Joakim Söderberg, data protection lawyer at Noyb.

ChatGPT has since been updated and can now search the internet, which reduces the likelihood that it will invent facts. However, false information may still persist in the AI system, and user data is fed back into the system for training, according to Noyb.

“The fact that someone could read this output and believe it is true, is what scares me the most,” Hjalmar Holmen said.




