LAION Calls for Less Regulation on Open-Source Generative AI in Europe
Concern may be rising about the EU AI Act's impact on open-source models
LAION (Large-scale Artificial Intelligence Open Network), the non-profit focused on creating open-source AI foundation models and datasets, issued an open letter calling on the EU to reduce regulatory hurdles for open-source AI in Europe. Titled “Towards a Transparent AI Future,” the letter outlines why open-source AI models provide societal benefit and should be treated differently than closed-source proprietary models.
Open-source AI models offer enhanced security, explainability, and robustness due to their transparency and the vast community oversight. They promote environmental sustainability by minimizing redundant training and fostering re-use, and serve as a catalyst for innovation, especially for small and mid-sized enterprises. To leverage these multifaceted benefits and uphold European sovereignty in AI, it is recommended that the EU Parliament incentivize open-source releases of AI models.
The organization is best known for its large datasets of image-text pairs that were used to train text-to-image models such as Stable Diffusion; LAION-5B includes 5.85 billion image-text pairs. Yann LeCun of Meta, Irina Rish of Mila, ELLIS (the European Laboratory for Learning and Intelligent Systems), and other AI researchers endorsed the letter.
Concerns About the EU AI Act
This is not LAION’s first such letter. Earlier this year, the organization issued a similar open letter detailing its primary concerns about the proposed legislation.
The Importance of Open-Source AI
The letter outlines three main reasons why open-source AI is worth protecting:
Safety through transparency: Open-source AI promotes safety by enabling researchers and authorities to audit model performance, identify risks, and establish mitigations or countermeasures.
Competition: Open-source AI allows small to medium enterprises to build on existing models and drive productivity, rather than relying on a few large firms for essential technology.
Security: Public and private organizations can adapt open-source models for specialized applications without sharing sensitive data with proprietary firms.
Concerns with the Draft AI Act
The draft AI Act may introduce new requirements for foundation models, which could negatively impact open-source AI research and development. The letter argues that "one-size-fits-all" rules will stifle open-source R&D and could:
Entrench proprietary gatekeepers, often large firms, to the detriment of open-source researchers and developers
Limit academic freedom and prevent the European research community from studying models of public significance
Reduce competition between model providers and drive investment in AI overseas
The EU AI Act is now in its trilogue process, in which the EU Parliament, the European Commission, and the Council of the European Union negotiate the final terms of the legislation. The negotiations are expected to conclude before the end of the year, so the letter’s timing suggests LAION hopes to influence the outcome while the terms are still in flux.
Incentives or Less Regulation?
The letter’s headline focuses on minimizing regulation, at least for open-source AI models. However, the letter itself only vaguely suggests creating incentives, without specifying which incentives the organization favors. A lower regulatory burden for open-source models would itself be one such incentive.
Stanford’s Institute for Human-Centered AI (HAI) published an analysis of large language models (LLMs) earlier this year, and it is clear that today’s text generation solutions are not ready for the EU AI Act. Identifying and excluding copyrighted data, along with providing data transparency, is going to be difficult.
HAI found partial compliance across the board. However, when it comes to regulation, partial credit often means no credit. In addition, compliance is likely to be complex, time-consuming, and costly. That will favor large companies with deep budgets, proprietary technology, and a robust revenue model. Open-source projects don’t have these resources, which is one reason the organization is calling for special status for open-source AI.