The Pushback Against Generative AI Alarmists and Anti-Open-Source Regulation Has Begun
Mozilla, Yann LeCun, and Andrew Ng all take action to shift the debate
The AI alarmists have controlled the narrative around AI safety for some time. While there are plenty of people, including some AI alarmists, who extol the benefits of generative AI and foundation models, when it comes to AI safety, the existential risk narrative dominates headlines. Elon Musk, Geoff Hinton, Sam Altman, and others with high profiles in business and science give the media what they want: incendiary headlines. But at what cost?
The Anti-Alarmist Response
Yann LeCun, chief AI scientist at Meta and one of the three researchers who shared the Turing Award with Geoff Hinton and Yoshua Bengio, has begun speaking out more forcefully about the downsides of AI alarmism.
LeCun is talking about open AI, meaning open-source or openly available models, and not the OpenAI company led by Sam Altman. His comments about the “silent majority” and “vocal exceptions” are backed by evidence. The 2022 Expert Survey on Progress in AI found that “The median respondent [of 738 AI researchers] believes the probability that the long-run effect of advanced AI on humanity will be ‘extremely bad (e.g., human extinction)’ is 5%.”
This past week, LeCun was joined by Andrew Ng, the former Baidu chief scientist and co-founder of Google Brain and Coursera; Clem Delangue, CEO of Hugging Face; and Mozilla. Ng wrote in a blog post:
In recent months, I sought out people concerned about the risk that AI might cause human extinction. I wanted to find out how they thought it could happen. They worried about things like a bad actor using AI to create a bioweapon or an AI system inadvertently driving humans to extinction, just as humans have driven other species to extinction through lack of awareness that our actions could have that effect.
When I try to evaluate how realistic these arguments are, I find them frustratingly vague and nonspecific. They boil down to “it could happen.” Trying to prove it couldn’t is akin to proving a negative. I can’t prove that AI won’t drive humans to extinction any more than I can prove that radio waves emitted from Earth won’t lead space aliens to find us and wipe us out.
Such overblown fears are already causing harm. High school students who take courses designed by Kira Learning, an AI Fund portfolio company that focuses on grade-school education, have said they are apprehensive about AI because they’ve heard it might lead to human extinction, and they don’t want to be a part of that. Are we scaring students away from careers that would be great for them and great for society?
I don’t doubt that many people who share such worries are sincere. But others have a significant financial incentive to spread fear:
Individuals can gain attention, which can lead to speaking fees or other revenue.
Nonprofit organizations can raise funds to combat the phantoms that they’ve conjured.
Legislators can boost campaign contributions by acting tough on tech companies.
I firmly believe that AI has the potential to help people lead longer, healthier, more fulfilling lives. One of the few things that can stop it is regulators passing ill-advised laws that impede progress. Some lobbyists for large companies — some of which would prefer not to have to compete with open source — are trying to convince policy makers that AI is so dangerous, governments should require licenses for large AI models. If enacted, such regulation would impede open source development and dramatically slow down innovation.
Hugging Face CEO Clem Delangue commented on LinkedIn:
I’m in favor of more research on future catastrophic risks of AI but let’s make sure it doesn’t lead in the short-term to regulatory capture or blinds us from looking at current important challenges like biases, misinformation, lack of transparency, concentration of power,…
To cap it off, Mozilla issued the Joint Statement on AI Safety and Openness. The letter was signed by LeCun, Ng, Arthur Mensch of Mistral AI, Irina Rish of MILA, Nobel Laureate Maria Ressa, and hundreds of others. It opens with:
We are at a critical juncture in AI governance. To mitigate current and future harms from AI systems, we need to embrace openness, transparency, and broad access. This needs to be a global priority.
Yes, openly available models come with risks and vulnerabilities — AI models can be abused by malicious actors or deployed by ill-equipped developers. However, we have seen time and time again that the same holds true for proprietary technologies — and that increasing public access and scrutiny makes technology safer, not more dangerous. The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst.
Further, history shows us that quickly rushing towards the wrong kind of regulation can lead to concentrations of power in ways that hurt competition and innovation. Open models can inform an open debate and improve policy making. If our objectives are safety, security and accountability, then openness and transparency are essential ingredients to get us there.
Key Points of the Anti-Alarmists
The arguments from the AI alarmists generally boil down to the claim that AI superintelligence is inevitable and that, once machines are more intelligent than people, there is a significant risk of human extinction. The anti-alarmist camp focuses on counterarguments such as:
There is no evidence that AI superintelligence is possible, much less inevitable.
There is no evidence that AI would lead to an extinction event even if it did achieve superintelligence.
Most AI scientists, even those who believe AI superintelligence is possible, do not consider an AI-led extinction event likely.
The alarmism is misinforming the general public about the likelihood of existential risk, and policymakers are basing new regulatory schemes on extreme and unlikely scenarios.
Many alarmists have financial incentives to promote greater regulation to reduce competition, raise their professional profiles, or raise money for organizations or politicians.
Current draft regulations driven by fear are biased toward strict controls, which will lead to the concentration of AI capabilities among the few.
Centralizing AI model access creates a higher likelihood that a bad actor could use the technology against a defenseless population that is unable to use comparable technology for protection.
Rules related to AI model access control are likely to severely handicap or eliminate open-source AI foundation models, which will reduce market choice and innovation.
The focus on catastrophic events draws attention away from addressing existing risks posed by AI technology.
Regulatory and Attention Risk
The previously one-sided, now two-sided debate is taking place against a backdrop of rising regulation. The EU AI Act is in final negotiations, and the U.S. White House has just issued a Presidential Executive Order on AI. Both measures suggest that control of AI model access will be an important element of the final rules, which will have second-order effects on open-source AI development.
LeCun has stepped up his efforts beyond social media with recent interviews in The Financial Times and Business Insider. The headlines of those articles were, respectively, “AI Will Never Threaten Humans, says Top Meta Scientist” and “AI One-Percenters Seizing Power Forever is the Real Doomsday Scenario, Warns AI Godfather.”
Preserving opportunities for open-source AI model development is a key objective for many anti-alarmists. The argument contends that broader access to the technology will make it harder for a privileged few to control and potentially misuse it. In addition, broad access would help ensure the technology can be employed to defend against catastrophic events.
Another key concern is that a singular focus on singularity risks diverts attention from addressing practical problems caused by AI technology. Next to an extinction event, most problems seem relatively minor. However, those problems are right in front of us and could be addressed with focused attention.
From Science to Policy
Society expects scientists to create innovation and to share objective, evidence-based information with policymakers. Whichever side of this issue you find yourself on, you likely prefer a reasoned debate to a one-sided one.
You may have noticed from my written and public statements that I am skeptical of some claims made by scientists that appear logically plausible but are presented without scientific evidence. I am also concerned about people who stand to benefit from regulation embracing it to stifle competition and slow innovation. And I am even more concerned about the nefarious use of AI by bad actors than about spontaneous consciousness emerging within machines.
At the same time, I don’t see anything wrong with considering cataclysmic, low-probability scenarios. My caution is against acting rashly based on fear, without regard to unintended consequences. Scientists often proclaim certitudes, but hypotheses are often wrong. Be careful about blindly following the fears of a vocal minority, while also maintaining openness to evidence of massive threats.
The Expert Survey cited above also found a median expectation of 37 years until high-level machine intelligence. It could be sooner. It could be later. It might never happen. Even if it does happen, there is no guarantee that it will be catastrophic. The good news is that the rise of the anti-alarmists suggests we might have a two-sided debate that policymakers can consult. Let’s hope they make the right decisions.
In a recent conversation with Yannis Agiomyrgiannakis, CEO of Altered AI and a former Google research scientist who was one of the developers of Tacotron, we discussed the downside risks of regulating open-source AI out of existence. We concluded that Safe, Open, and Sensible AI was the right way to think about the voluntary and regulatory approach to AI risk.
Let me know what you think in the comments. Where do you come out on this debate? Are the anti-alarmists diverting attention from a certain catastrophe and undermining readiness? Are the alarmists leading us down a path of negative unintended consequences and control of AI by the few?
IMO this response was long overdue. I’m really happy heavyweights like LeCun and Andrew Ng have stepped in, calling out industry leaders on what can only be described as fear-mongering.