YouTube said it is expanding its program designed to detect AI-generated content that uses the likeness of public figures. Under the updated rules, politicians, government officials and journalists will be able to report videos that misuse their image through realistic deepfakes.
The program allows public figures to submit complaints about videos that use AI-generated versions of their appearance. The platform then reviews the content and may remove it if it finds that the material misleads viewers or violates its policies on synthetic media.
YouTube began developing these tools in 2024, initially focusing on protecting music creators. Artists were able to report content that imitated their voices using generative artificial intelligence. The system is now being expanded to additional professions that are particularly vulnerable to deepfake manipulation.
The company notes that advances in generative AI increase the risk of highly realistic videos impersonating real people. For politicians and government officials, such content could influence public debate or electoral processes; for journalists, it could undermine media credibility.
The platform emphasizes that each case will be evaluated individually. Decisions about removing content will consider factors such as the context of the publication, the likelihood of misleading viewers and whether the material is clearly presented as parody or satire.
The expansion of the program reflects broader efforts by major internet platforms to limit the risks associated with AI-generated content. The deepfake problem is particularly significant in politics and media, where realistic fake recordings can spread rapidly online.