Breaking News: Canada Opens ChatGPT Investigation, EU Interest in Italian Action Grows
France's privacy watchdog has received new complaints; Germany and Ireland are in contact with the Italian GPDP
The Office of the Privacy Commissioner of Canada announced yesterday that it had opened an investigation into OpenAI. Unlike in Italy, the announcement was not accompanied by a request that OpenAI cease ChatGPT services in Canada. According to the very brief notice on the OPC’s website:
The Office of the Privacy Commissioner of Canada has launched an investigation into the company behind artificial intelligence-powered chatbot ChatGPT…
The investigation into OpenAI, the operator of ChatGPT, was launched in response to a complaint alleging the collection, use and disclosure of personal information without consent.
As this is an active investigation, no additional details are available at this time.
More EU Nations Likely to Follow Italy
A widely republished AFP report earlier today stated that France’s privacy regulator had received two complaints about ChatGPT but had not yet initiated an investigation.
France's data regulator said on Wednesday it had received two complaints about the AI program ChatGPT, as European authorities deepened their scrutiny of the chatbot days after Italy banned it…
Zoe Vilain of Janus International, a campaign group, filed the first complaint. "We are not anti-tech, but we want ethical technology," she told AFP. She wrote in her complaint that when she tried to sign up for a ChatGPT account, she was not asked to consent to any general terms of use or privacy policy.
The other complaint came from David Libeau, a developer who wrote in his submission he had found personal information about himself when he asked ChatGPT about his profile.
"When I asked for more information, the algorithm started to make up stories about me, creating websites or organising online events that were totally false," he wrote.
Neither of these complaints mentioned the security vulnerability OpenAI admitted to in March, which potentially exposed some private user data. The Italian privacy agency GPDP indicated that the security issue was the catalyst for its investigation, though it also cited concerns about false information and the use of private data for training AI models.
The first complaint to the French CNIL raises concerns about privacy protection and terms of use disclosures. The second complaint focuses on incorrect information, which is the widely known problem of “hallucinations.” On the latter point, the ChatGPT landing page warns the user that false information may be generated.
CNIL has not yet opened an investigation, so we do not know whether it will pursue these complaints. If it does, they could broaden the scope of inquiry into ChatGPT and OpenAI beyond the issues Italy raised.
The German commissioner for data protection, Ulrich Kelber, told the Handelsblatt newspaper that Germany could follow in Italy's footsteps by blocking ChatGPT if needed. He said that Germany had requested more information from Italy on its ban. Privacy regulators in France and Ireland have also reached out to counterparts in Italy to find out more about the basis of the ban.
The Bandwagon Effect
Many people are calling these developments a domino effect: a chain reaction set off by a single event. The bandwagon effect is more apt, however, because it acknowledges the social pressure and cognitive biases at play.
Every privacy regulator is sure to face some pressure, formal or social, to look into this further. Their actions may still be grounded in mission or altruism, but you cannot ignore the presence of social pressure and the benefits of acting.
To be clear, not everyone is following the same path today. According to U.S. News, “The privacy regulator in Sweden, however, said it had no plan to ban ChatGPT nor was it in contact with the Italian watchdog. Spain's regulator said it had not received any complaint about ChatGPT but did not rule out a future investigation.”
It may be wholly appropriate for all of these investigations to move forward. The point here is that the merits of any particular claim are less relevant than the “facts on the ground.” It’s happening. The question is how well OpenAI will weather the storm.
The most likely outcome is a modest fine and, if OpenAI remains in violation, specific changes to bring it into compliance, combined with other voluntary steps to mollify regulators. OpenAI and Microsoft will want European regulators to view them as good corporate citizens and will try to resolve these issues quickly. Generative AI adoption is just getting started, and there is no reason to be cut out of a giant market when you have an early lead over competitors.
A sticking point will arise if regulators press on the “hallucination,” information-sourcing, or model-training-data concerns. These problems are endemic to ChatGPT’s model training and architecture, so there is little OpenAI can do to address them beyond notifying users.