China Lays Down the Law on Deepfakes and Generative AI
New laws could influence regulation in the U.S. and Europe
The Wall Street Journal reported today on new rules related to “deepfakes” going into effect in China next week, or, more specifically, what the Chinese government calls “deep synthesis.” Most headlines will talk about deepfakes, but Synthedia readers should be aware that ill-informed journalists use the term as a catch-all for anything generative AI. Yes, deepfakes are covered, and a whole lot more.
The rules were first published on December 11th and will go into effect at the end of the month. Reporter Karen Hao wrote yesterday:
“Beijing’s internet regulator, the Cyberspace Administration of China, will begin enforcing the regulation—on what it calls “deep synthesis” technology, including AI-powered image, audio and text-generation software—starting Tuesday, marking the world’s first comprehensive attempt by a major regulatory agency to curb one of the most explosive and controversial areas of AI advancement.”
Everything from ChatGPT to Stable Diffusion and voice clones to digital people is likely covered by the broad definition. Of course, you probably know the generative AI technology vendors of the West. In China, ByteDance, Tencent, and Baidu all have multiple solutions, along with a host of independent companies. These will be the first targets of the new laws.
Combating Fake (and Unapproved) News
“The new regulations, among other things, prohibit the use of AI-generated content for spreading ‘fake news,’ or information deemed disruptive to the economy or national security—broadly defined categories that give authorities wide latitude to interpret. They also require providers of deep synthesis technologies, including companies, research organizations and individuals, to prominently label images, videos and text as synthetically generated or edited when they could be misconstrued as real.”
There may be some consumer protection in these laws, but the key driver behind them appears to be protecting the government. That may also emerge as an important motivation for Western governments.
Politics, Porn, Fraud, and Rights
In China and the West, you will find similar arguments favoring stricter regulation of generative AI. Politicians are concerned that generative AI and deepfakes might influence elections and public policy. Elected officials and government employees also worry about the potential to subvert trust in government. These sentiments are not unique to China.
A second argument often advanced concerns protecting people from revenge porn. This is a real phenomenon, and many people would like better laws for prosecuting perpetrators. Granted, the proposed regulations are sure to cast a much wider net that goes well beyond protecting against this ghastly practice.
There is also the persistent concern about fraud enabled by voice clones and deepfake videos. This has not become the epidemic that public and security officials predicted just a few years ago, but it is a risk people understand, and many would like to see some protections along these lines.
At some point, you may also hear an argument about protecting intellectual property and copyrights. Powerful media interests and celebrities are particularly interested in this issue.
Is it Real, or is it Memorex?
A legendary advertising campaign from the 1970s posed the question, “Is it live, or is it Memorex?” The audio tape maker wanted people to regard recordings on its tapes as equivalent to the live experience. “Is it live, or is it recorded?” could easily be transposed into today’s deepfake and generative AI era as “Is it real, or is it AI?” Granted, in the digital age, what is “real” anyway?
Deepfake technologies typically exhibit artifacts that enable people to distinguish them from reality. However, some people, particularly those who don’t know what to look for, cannot. And the technology is improving regularly.
To make identifying AI-generated content easier, everyone from the New York Public School System to social media platforms would like to see watermarks and disclosures attached to these productions. I suspect such rules will be obsolete within a year or two of enactment (and maybe the day they are enacted), but labeling is likely to become law anyway. There are few things governments like more than labeling requirements.
It is less clear what other restrictions will follow beyond the use limitations for the content types mentioned above. Hopefully, the regulation will be applied narrowly so it does not inhibit the development of a technology with many very positive applications.