The EU AI Act is Approved but It's Not Done Yet - A Breakdown of Some Interesting Details
The text of the legislation won't be available until 2024, and compliance won't be required for about two years
A deal on the EU AI Act was reached just before midnight in Brussels today by negotiators of the EU Trilogue process. However, the text will not be released until sometime in early 2024, and it still requires final ratification by the European Parliament. The European Parliament, Council, and Commission each drafted and accepted provisional versions of the Act. That was followed by the Trilogue, a process used to harmonize the legislation and resolve conflicting details before returning to the governmental bodies for final approval.
In a press conference following the agreement, Thierry Breton of the European Commission said the draft regulations for transparency and governance requirements should be published in about 12 months, with the final details taking effect in about 24 months. These are not firm timelines, but that is the current expectation. It means the EU AI Act is unlikely to take full effect until 2026, later than the 2025 time frame many had anticipated.
Perspectives from Key Negotiators
Dragos Tudorache, a Romanian Member of the European Parliament (MEP), commented:
We did deliver a balance between protection and innovation. Not many people thought that this was possible. We were always being questioned whether there is enough protection or whether there is enough stimulant for innovation in this text. And I can say this balance is there. We have safeguards.
During the media question segment, it was apparent that everyone wanted to talk about safety, and the parliamentarians were particularly focused on those rules. At the other end of the spectrum was the Commission's Breton, who acknowledged safeguards but spoke much more about regulation that doesn't undermine innovation and may even stimulate it. A statement released by the European Parliament led off with safety and concluded with the penalties.
Banned applications
Recognising the potential threat to citizens’ rights and democracy posed by certain applications of AI, the co-legislators agreed to prohibit:
biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
predictive policing;
untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
emotion recognition in the workplace and educational institutions;
social scoring based on social behaviour or personal characteristics;
AI systems that manipulate human behaviour to circumvent their free will;
AI used to exploit the vulnerabilities of people (due to their age, disability, social or
…
Sanctions
Non-compliance with the rules can lead to fines ranging from €35 million or 7% of global turnover to €7.5 million or 1.5% of turnover, depending on the infringement and size of the company.
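To make those bounds concrete, here is a minimal sketch of how the penalty math could work, assuming (as in GDPR-style penalty regimes) that the applicable fine is the greater of the fixed amount and the turnover percentage. The `max_fine` helper and the example turnover figure are hypothetical illustrations, not drawn from the Act's text.

```python
# Illustrative sketch only: the "greater of the two" rule is an assumption
# based on the figures quoted in the press release, not the Act's final text.

def max_fine(fixed_eur_m: float, turnover_pct: float, global_turnover_eur_m: float) -> float:
    """Return the larger of the fixed amount and the percentage of global turnover (in millions of euros)."""
    return max(fixed_eur_m, turnover_pct / 100 * global_turnover_eur_m)

# A hypothetical company with 10 billion euros in global annual turnover:
turnover = 10_000  # millions of euros

print(max_fine(35.0, 7.0, turnover))  # upper bound: 700.0 (7% of turnover exceeds the 35M floor)
print(max_fine(7.5, 1.5, turnover))   # lower bound: 150.0 (1.5% of turnover exceeds the 7.5M floor)
```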
Spain's Secretary of State for Digitalisation and Artificial Intelligence, Carme Artigas, commented during the press conference:
Of course, I mention three things that we consider key, win-win in this negotiation:
The uses of AI for development and research is out of scope.
Open-source has very limited and light requirements
And the ecosystem boost of a lot of incentives
…
This legislation is future proof. One of the challenges we have is how can we cope with this regulation with the constant changes of technology? How can we solve this? Because the law itself has updating mechanisms according to the advances of technology.
This is particularly significant as there has been considerable discussion about how the design of the original legislation may undermine the viability of open-source generative AI foundation models. The governmental negotiators believe they have created a path for open-source success, though it will be important to hear from the open-source community on how feasible compliance will be.
The “updating mechanisms” are an interesting element of the final legislation. You could tell from the negotiators' comments that they see these mechanisms as the answer to keeping the law relevant amid the constant technological change Artigas described.
A statement released by the European Commission's Breton included additional details:
The AI Act is much more than a rulebook—it's a launchpad for EU startups and researchers to lead the global race for trustworthy AI.
…
Highlights of the trilogue
Large AI Models (e.g. GPT-4)
With today's agreement, we are the first to establish a binding but balanced framework for large AI models (“general-purpose AI models”), promoting innovation along the AI value chain.
We agreed on a two-tier approach, with transparency requirements for all general-purpose AI models and stronger requirements for powerful models with systemic impacts across our EU Single Market.
For these systemic models, we developed an effective and agile system to assess and tackle their systemic risks.
During the trilogue we carefully calibrated this approach, in order to avoid excessive burden, while still ensuring that developers share important information with downstream AI providers (including many SMEs). And we aligned on clear definitions that give legal certainty to model developers.
Protecting fundamental rights
We spent a lot of time on finding the right balance between making the most of AI potential to support law enforcement while protecting our citizens' fundamental rights. We do not want any mass surveillance in Europe…
During the trilogue, we defined the specifics of this risk-based approach. In particular, we agreed on a set of well-balanced and well-calibrated bans, such as real-time facial recognition, with a small number of well-defined exemptions and safeguards.
We also defined various high-risk use cases, such as certain uses of AI in law enforcement, workplace and education, where we see a particular risk for fundamental rights. And we ensured that the high-risk requirements are effective, proportionate and well-defined.
Innovation
We developed tools to promote innovation. Beyond the previously agreed regulatory sandboxes, we aligned on the possibility to test high-risk AI systems in real-world conditions (outside the lab, with the necessary safeguards).
A media question asked whether GPT-4 would be the only model to fall under the regulation due to the FLOP (floating point operations) threshold. The dividing line between the low tier and the high tier appears to be a training-compute threshold of 10^25 FLOPs, with models above it treated as high tier. The speakers did not clarify which vendors would be impacted by this. However, in answering a separate question about Mistral, Artigas suggested the company is in the R&D phase and, therefore, would not be impacted today and would likely fall in the lower tier in the future.
However, FLOPs will not be the only metric determining whether a model falls into the low or high tier. Artigas mentioned that the number of users and other attributes may also determine the designation. A rough sketch of that two-tier logic appears below.
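As an illustration of the two-tier approach described at the press conference, here is a minimal sketch. The 10^25 FLOP cutoff reflects the training-compute figure discussed at the time, while the user-count threshold, field names, and `tier` function are hypothetical placeholders, since the final criteria had not been published.

```python
# Illustrative sketch of the two-tier classification discussed at the press
# conference. The user-count threshold is a hypothetical placeholder; only the
# training-compute cutoff was discussed, and the final criteria were unpublished.
from dataclasses import dataclass

SYSTEMIC_FLOP_THRESHOLD = 1e25  # training compute discussed for the high tier
USER_THRESHOLD = 10_000_000     # hypothetical: Artigas noted user counts may also matter

@dataclass
class Model:
    name: str
    training_flops: float
    eu_users: int

def tier(model: Model) -> str:
    """Classify a general-purpose AI model into the low or high (systemic) tier."""
    if model.training_flops > SYSTEMIC_FLOP_THRESHOLD or model.eu_users > USER_THRESHOLD:
        return "high tier (systemic): stronger requirements"
    return "low tier: baseline transparency requirements"

print(tier(Model("frontier-model", 3e25, 50_000_000)))  # high tier
print(tier(Model("startup-model", 5e23, 200_000)))      # low tier
```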
Artigas answered a question about where chatbots fall in the regulatory system: “Chat can be considered a general-purpose system, but if a general-purpose system is not considered a high-risk system, the designation does not apply.” So, it will be based on the system and the use case. Breton added that “chatbots will be subjected to transparency requirements.”
Echoes of GDPR
The significance of the EU AI Act goes beyond Europe. Aside from representing a large economic market with a population of 450 million, the EU already has a history of setting the regulatory bar for technology. GDPR is the most salient example. GDPR was stricter than other consumer privacy notification and consent regulations, and it simply became easier for everyone to focus on complying with the EU rules. It became the de facto compliance standard.
Outside China, the EU AI Act was expected to be the strictest of the major economic markets and fill a similar role. However, the U.S. White House’s executive order may turn out to be more restrictive when it is implemented next year. It is also likely to go into effect faster. Unless it faces delays, rules will be published in 2024 and are expected to go into effect in 2025.
It seems the business community made a strong case to the European political apparatus because there were many comments about protecting and fostering innovation while also regulating. In other words, the politicians wanted everyone to know that they would not stifle innovation in Europe and hoped the regulatory certainty would benefit businesses while still protecting the rights of EU citizens. Whether that is just messaging or it turns out to be true will be determined over time.
Still, voluntary compliance is more than a year away, and legal requirements are likely more than two years out. I don't expect this to make much of a difference for the industry until 2025 at the earliest. The only tangible impact in the interim is likely to be funding, or the lack thereof, for technologies in the banned categories.