8 Comments
Apr 1, 2023

Always on point, Bret. I was just wondering how long it would take this news to reach the U.S.

IMHO, I would say that the VPN option is the more probable one. There are tons of reasons why Italians should protest, and we don't, for many reasons that I won't address here. I hope I turn out to be wrong! There are a lot of sad and disappointed comments on social media in these hours, all stating more or less the same thing: you don't govern with fear.

Banning, limiting, and preventing without knowing what you are talking about is not a smart move.

Let's look at the bright side:

1. It's time to sit down at a round table and talk about regulation globally; maybe the emerging EU AI Act could be a starting point.

2. This ban sets a precedent, and it is a huge opportunity for authorities and companies to create open-source models as well.

author

Interesting perspective. Thank you for commenting!

My instinct is that subverting the rule by using a VPN will be the preferred route. There is a cost if you don't have a VPN today, but it is the path of least resistance: you regain access immediately, and the outcome is assured, unlike with social pressure. Alternatively, there are other AI writing assistants based on GPT-3.5 and the ChatGPT turbo model that may be an option. Of course, those will be paid options, so whether through a VPN or another service, Italians will now bear a cost to use an AI writing assistant.

I didn't include this in the post, but when dealing with government regulators you always have to be careful not to offer a pretext for action. My assumption is that there was already concern about ChatGPT for a variety of reasons. That could range from concerns about its training data, to "safety" and the infinite personal opinions about what is acceptable, to threats of misinformation, the impact on jobs, or the "imposition" of another popular software package with foreign origins. The risks posed by these concerns may have been too ambiguous to act upon.

However, the acknowledged security breach then becomes the pretext to take action. It is unambiguous, and therefore officials can justify an investigation. This creates a scenario where the other concerns can also be evaluated, or where the agency can, at least, begin extracting a toll to do business in the country in the form of fees.

I am sure that these concerns are genuine for the regulators, and additional oversight into the nature of the breach and compliance with cybersecurity laws and guidelines is not a bad idea. The question is whether the security vulnerability was an egregious error with calculable damages and high enough ongoing risk to demand a cessation of operations, or whether opening an investigation would have sufficed.

However, if you already harbor concerns, you are more likely to overreach in the short term.

Apr 1, 2023

Thanks for replying, Bret.

I get your point: we probably didn't see it coming, but under the hood there was a sort of 'suspicion' of ChatGPT.

Here is an article from Politico from March 3rd (https://www.politico.eu/article/eu-plan-regulate-chatgpt-openai-artificial-intelligence-act/):

"The rise of ChatGPT is now forcing the European Parliament to follow suit. In February the lead lawmakers on the AI Act, Benifei and Tudorache, proposed that AI systems generating complex texts without human oversight should be part of the “high-risk” list — an effort to stop ChatGPT from churning out disinformation at scale."

So it was probably only a matter of time...

Just to be precise: as an Italian user, at this moment, OpenAI's Playground is still available, and you can make API calls to the GPT-x models.
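For example, a direct API call like the one below still worked. This is just a minimal sketch, assuming the `openai` Python package (the pre-1.0 interface current at the time) and a placeholder API key; it is illustrative, not taken from the original comment.

```python
# Minimal sketch of a direct API call to a GPT model, assuming the
# `openai` Python package (pre-1.0 interface) and a valid API key.
import openai

openai.api_key = "sk-..."  # placeholder; use your own key

# The ChatGPT web app was blocked in Italy, but direct API calls
# like this one were reportedly still reachable at the time.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write one sentence about Rome."}],
)
print(response["choices"][0]["message"]["content"])
```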

author

Thanks for the clarification. It makes sense that the APIs are still available, given that the action was specific to ChatGPT and related to the security breach, which I think OpenAI contends only impacted that service.

With that said, the concerns about hallucinations, lack of repeatability of the results, and age-gating mechanisms could all apply directly to the Playground. We could also see these same complaints leveled against application providers that leverage OpenAI APIs. So, is this a one-and-done investigation to put OpenAI and other generative AI model developers on notice, or is it the first domino to fall?

Apr 1, 2023

Great question! I think that, starting today, CEOs in generative AI and related fields are keeping an eye on what's going on in Italy to take notes on what security measures will be required in specific markets. I would say the Italian Garante is *not* interested in putting a halt to OpenAI itself or sending 'indirect messages' to tech giants.

As for the concerns about hallucinations, I don't understand why the Garante complains about them, since:

1) it is out of its scope to investigate the quality of the results (I'd like the opinion of a lawyer on this point to validate this assumption),

2) OpenAI clearly states that this technology may give wrong and biased responses 'by design'.

author

100% on your two points.

And, by the way, Google Search is sometimes wrong and not repeatable, major media organizations publish false information, and government websites often contain incorrect information. I suspect you could find false information on the Garante website in short order.

If people were really concerned about false information in results, there would be little demand for these services. What people seem to be more worried about, but don't know how to express cogently, is the fear that generative AI models will help people formulate false arguments more persuasively.

Apr 1, 2023

This is a real issue, for sure, and it started years ago, maybe even before the rise of the web.

But again, we come back to the point: some guidelines have to be traced out if we want to hold together data protection, security, fair use, and technology. Otherwise, it seems we are going to close the barn door after the cows have gotten out.

I would like to know OpenAI's next move: whether they are willing to reply to the Garante or leave the whole thing at the door. That would also be an indicator.

Apr 1, 2023 · edited Apr 1, 2023

Comment deleted
author

I agree that the interest in more transparency about how the solutions work and what is inside them is a motivating factor. I suspect the investigators will be disappointed. The AI engineers who build these solutions have little insight into this themselves. Neural systems are unlike previous generations of software, where you could actually look inside and have a good idea of what was there and what was likely to come out during use.
