
YouTube says it will require creators to label 'realistic' AI content

YouTube says it will now prompt users to say whether their videos contain altered or synthetic content that appears to be real.
YouTube is instituting new labels for AI content. (Didem Mente / Anadolu via Getty Images)

YouTube announced it will now require users to indicate whether the videos they upload depict altered or synthetic media, including artificial intelligence-generated content. For videos related to “sensitive topics” like “health, news, elections, or finance,” YouTube said it will put a label on the video itself.

On Monday, the platform explained that it will ask users uploading new videos to answer "Yes" or "No" to indicate whether their videos contain altered content. Specifically, the platform will ask if any of the following describes their content: "Makes a real person appear to say or do something they didn't say or do," "Alters footage of a real event or place," or "Generates a realistic-looking scene that didn't actually occur." When a user answers "Yes," YouTube will put a label in the video description that says "Altered or synthetic content."

YouTube's announcement said the new feature would be available Monday, but when NBC News tried to upload a new video after the announcement, the feature had not yet appeared.

The announcement comes as tech companies are scrambling to address the spiraling issue of AI-generated misinformation online.

YouTube, which is owned by Google, already hosts numerous unlabeled videos that are either entirely AI-generated or include AI-generated elements. A January NBC News investigation found hundreds of videos uploaded since 2022 that used AI tools to spread fake news about Black celebrities. Many of the videos, for example, featured AI-generated audio narration, which can be produced far more quickly and cheaply than a recording of a human actor reading a script. Other videos used thumbnails containing AI-edited photos, such as photos of celebrities' faces that were edited to make them look angry or sad.

Not all of the examples NBC News previously found on YouTube would be labeled as synthetic content under YouTube's new rules. Using AI text-to-speech technology to create voice-overs, for example, would not by itself require a label unless the resulting video was intended to deceive viewers with a "realistic" but fake voice imitating a real person.

"We’re not requiring creators to disclose content that is clearly unrealistic, animated, includes special effects, or has used generative AI for production assistance," YouTube said in its announcement Monday. "We won’t require creators to disclose if generative AI was used for productivity, like generating scripts, content ideas, or automatic captions."

YouTube said it will first introduce the altered and synthetic content labels on its mobile app, followed by its desktop site and YouTube TV "in the weeks ahead." YouTube said it will eventually penalize users "who consistently choose not to disclose this information," although it did not specify when that enforcement would begin. YouTube said it may also add the label itself in cases where unlabeled content could "confuse or mislead people."

While YouTube has been unable to contain the wave of misleading AI-generated content that already exists on its platform, its parent company, Google, has pressed ahead with releasing consumer AI products like Gemini, an AI model that can generate images. Gemini came under fire for generating misleading historical images that depicted non-white people in scenes where they would not have been, such as in Nazi uniforms or in the U.S. Congress in the 1800s. In response, Google temporarily limited Gemini's ability to create images of people.