TikTok's New Rules on Deepfakes and Other Synthetic Media
Industry policies range from nothing to hyper-detailed
TikTok first established guidelines about the use of synthetic media and deepfakes in August 2020. The guidelines were last updated a year ago, adding three words: “significant” and “other persons.” For about two and a half years, first 27 and then 30 words sufficed to cover TikTok’s approach to AI-generated media. Times, and technology, have changed.
A new notice in TikTok’s Community Guidelines says the rules are changing next month. The biggest change is the new “Synthetic and Manipulated Media” section. Synthetic media was previously covered by a single bullet point in the Harmful Misinformation section; the update devotes 384 words to what you can and cannot do with synthetic media on TikTok.
TikTok
From the Harmful Misinformation Section [Old]
August 2020
Digital Forgeries (Synthetic Media or Manipulated Media) that mislead users by distorting the truth of events and cause harm to the subject of the video or society
March 2022
Digital Forgeries (Synthetic Media or Manipulated Media) that mislead users by distorting the truth of events and cause significant harm to the subject of the video, other persons, or society
Proposed updates for April 2023
Synthetic and Manipulated Media [New]
We welcome the creativity that new artificial intelligence (AI) and other digital technologies may unlock. However, AI can make it more difficult to distinguish between fact and fiction, carrying both societal and individual risks. Synthetic or manipulated media that shows realistic scenes must be clearly disclosed. This can be done through the use of a sticker or caption, such as ‘synthetic’, ‘fake’, ‘not real’, or ‘altered’.
We balance the expressive value that synthetic media has against the risk of harms to individuals. We do not allow synthetic media that contains the likeness of any real private figure. While we provide more latitude for public figures, we do not want them to be the subject of abuse, or for people to be misled about political or financial issues. We do not allow synthetic media of public figures if the content is used for endorsements or violates any other policy. This includes prohibitions on hate speech, sexual exploitation, and serious forms of harassment (learn more about public figures in Harassment and Bullying).
More information
Synthetic media is content created or modified by AI technology. It includes highly realistic digitally-created (fake) content of real people, such as a video of a real person speaking but their words have been modified or changed.
Realistic scenes include showing fake people, places or events that look like they are real.
Public figures are adults (18 years and older) with a significant public role, such as a government official, politician, business leader, and celebrity. We do not identify people under the age of 18 as public figures.
NOT allowed
Synthetic media showing realistic scenes that are not prominently disclosed or labeled in the video
Synthetic media that contains the likeness (visual or audio) of a real person, including: (1) a young person, (2) an adult private figure, and (3) an adult public figure when used for political or commercial endorsements, or if it violates any other policy
Material that has been edited, spliced, or combined (such as video and audio) in a way that may mislead a person about real-world events
Allowed
Synthetic media showing a public figure in certain contexts, including artistic and educational content, such as a video showing a celebrity doing a popular TikTok dance, and a historical figure featured in a history lesson
Labeling and Limits
The new wording shows that TikTok knows AI-generated and manipulated media is popular on the platform and will be a significant source of users’ creative production in the future. The new policy boils down to four rules (I mean guidelines), and a sketch of how they compose follows the list:
You must label synthetic media and deepfakes with a badge in the video or a reference in the caption.
You cannot publish a deepfake of a private individual (i.e., not a public figure) or anyone under 18.
You cannot publish a deepfake of a public figure if it implies an endorsement of a political position or product or if it violates another policy. You can use the technology to render a public figure if TikTok thinks it is fun or educational and doesn’t violate other policies.
You cannot combine two or more pieces of media in a way that may mislead users.
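To make the composition concrete, here is a minimal sketch in Python. TikTok has published prose guidelines, not code, so the data model and every field name below are hypothetical, invented purely to illustrate how the four rules interact; real moderation also involves human judgment on each of these subjective calls.

```python
# Illustrative sketch only -- not TikTok's enforcement logic.
# All field names are hypothetical stand-ins for moderation judgments.
from dataclasses import dataclass


@dataclass
class SyntheticPost:
    disclosed: bool               # labeled with a sticker or caption
    depicts_real_person: bool     # visual or audio likeness
    subject_is_minor: bool        # anyone under 18
    subject_is_public_figure: bool
    implies_endorsement: bool     # political or commercial endorsement
    violates_other_policy: bool   # hate speech, harassment, etc.
    misleading_combination: bool  # edited/spliced media that misleads


def is_allowed(post: SyntheticPost) -> bool:
    if not post.disclosed:
        return False  # Rule 1: synthetic media must be labeled
    if post.depicts_real_person:
        # Rule 2: no private figures and no one under 18
        if post.subject_is_minor or not post.subject_is_public_figure:
            return False
        # Rule 3: public figures allowed only outside endorsements
        # and other policy violations
        if post.implies_endorsement or post.violates_other_policy:
            return False
    if post.misleading_combination:
        return False  # Rule 4: no misleading edits, splices, or combinations
    return True
```

Even in this toy form, the amount of judgment packed into booleans like implies_endorsement shows how much room the subjective terms leave TikTok to decide cases.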
Note that TikTok has left itself latitude with several subjective terms it can interpret on a case-by-case basis, though it also offers several points of clarity. Note, too, that the new guidelines drop the requirement of “significant harm” to an individual or society. That removal expands TikTok’s latitude in adjudicating cases: it no longer needs to establish that “harm” or “significant harm” resulted from a synthetic media post before acting on it.
Why Is TikTok Making Changes?
Shou Chew, TikTok’s CEO, testified before Congress today, and it is fair to say he received a chilly reception from most of the elected representatives. A key concern politicians expressed is whether TikTok’s origin in China creates unique privacy risks for U.S. citizens. Chew commented (as reported by CNN):
TikTok itself is not available in mainland China, we’re headquartered in Los Angeles and Singapore, and we have 7,000 employees in the U.S. today … Still, we have heard important concerns about the potential for unwanted foreign access to US data and potential manipulation of the TikTok US ecosystem. Our approach has never been to dismiss or trivialize any of these concerns. We have addressed them with real action.
Chew took to TikTok this week to rally user support for what the company is doing and to explain what’s at stake. He clearly hopes that the platform’s popularity will make it hard to ban the service outright or to impose overly strict operating constraints.
Some politicians have talked about banning TikTok. Now this could take TikTok away from all 150 million of you. I’ll be testifying before Congress later this week to share all that we’re doing to protect Americans using the app and deliver on our mission to inspire creativity and to bring joy.
Another concern is that TikTok is a channel for disinformation delivered through manipulated media. The timing of the proposed Community Guidelines changes is surely not a coincidence. Chew can point to them in response to questions at the hearing and from the news media afterward.
While the new Community Guidelines appear designed to protect individuals depicted in videos from harm, they are also designed to protect users from being misled. The language specifically calls out media that could mislead about “real-world events,” which aligns closely with U.S. policymakers’ concerns about deepfakes of world leaders that have spread on YouTube and TikTok.
The most famous of these problematic deepfakes is a video of Ukrainian President Volodymyr Zelensky telling soldiers to surrender to Russia. The idea is that harm can take many forms, ranging from embarrassment and reputational damage to influence over international conflicts and government policy.
Just yesterday, social media was awash in deepfake images of Donald Trump being arrested in New York City. Some users noted that the content should have been labeled.
Impact on the Industry
I don’t expect other social media companies to copy TikTok’s new synthetic media policies as quickly as they copy its features. However, it is notable that TikTok is taking these actions. You might see updates from other social media companies before their executives next appear before Congress.
Instagram does not reference the technologies, and YouTube does so only indirectly. Twitter appears to be in the minority, having thought deeply about this topic and decided to err on the side of detail and transparency.
YouTube does not specifically reference synthetic media or deepfake technology. However, its misinformation policies include an expansive standard that reaches manipulated content:
YouTube
Certain types of misleading or deceptive content with serious risk of egregious harm are not allowed on YouTube. This includes certain types of misinformation that can cause real-world harm, like promoting harmful remedies or treatments, certain types of technically manipulated content, or content interfering with democratic processes.
The standard here is a “serious risk of egregious harm.” There is no explanation of what that entails or whether AI-manipulated media is included. Note that TikTok removed a similar standard, a change that gives it additional latitude in enforcement.
Twitter is probably the gold standard for synthetic media policies. Its published policy on “synthetic and manipulated media” runs about 1,400 words. Here is the introduction (a sketch of how its criteria combine follows the excerpt):
In order for content with misleading media (including images, videos, audios, gifs, and URLs hosting relevant content) to be labeled or removed under this policy, it must:
Include media that is significantly and deceptively altered, manipulated, or fabricated, or
Include media that is shared in a deceptive manner or with false context, and
Include media likely to result in widespread confusion on public issues, impact public safety, or cause serious harm
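Notice the trailing “or” and “and” in those bullets: read literally, the first two criteria are alternatives, and the third must also hold. Here is a minimal Python sketch of how the quoted criteria combine; the function and parameter names are hypothetical, and this is not Twitter’s implementation.

```python
# Illustrative sketch only -- Twitter publishes prose criteria, not code.
# Per the quoted introduction, media is actionable when it satisfies
# (criterion 1 OR criterion 2) AND criterion 3.
def actionable_under_policy(
    significantly_altered: bool,  # 1: deceptively altered, manipulated, fabricated
    deceptive_context: bool,      # 2: shared deceptively or with false context
    likely_serious_harm: bool,    # 3: widespread confusion, safety, serious harm
) -> bool:
    return (significantly_altered or deceptive_context) and likely_serious_harm
```

The conjunction matters: under this reading, deceptively altered media alone is not actionable unless it is also likely to cause confusion, safety impacts, or serious harm.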
The interesting contrast between Twitter and TikTok is that Twitter says more about how it will decide whether content is in violation, while TikTok’s policy focuses on the specific, limited instances that are not allowed. As synthetic media becomes more common in 2023, alongside the rise of easy-to-use generative AI, many companies are sure to clarify their policies. Ambiguous guidelines create confusion and headaches that content moderation teams would rather not adjudicate on an ad hoc basis.
This may also spill over into corporate use of chatbots powered by large language models and of virtual humans that engage with customers. I spoke last week with a group called AI Circle, and one member said he thought disclosing whether a bot or avatar is synthetic should be the norm. So, what is your synthetic media policy?