The spread of AI-generated videos impersonating American academic John Mearsheimer has highlighted how easily deepfakes circulate online and how difficult it remains for individuals to have fabricated content removed from major digital platforms.
Deepfake videos falsely portraying the prominent scholar have surfaced widely on YouTube, underscoring the growing challenge of tackling digital impersonation and misinformation in the age of generative AI.
The international relations expert, based at the University of Chicago, spent several months attempting to have hundreds of fabricated videos taken down from the Google-owned platform. The clips, created using AI tools, appeared to show him offering controversial opinions on sensitive geopolitical issues, including US-China relations and regional tensions in Asia.
According to Mearsheimer’s office, at least 43 YouTube channels were identified publishing videos that misused his image and voice. Some of the content was aimed at international audiences, including Mandarin-language videos, giving the false impression that the academic was addressing Chinese viewers directly.
“These videos are entirely fake, yet designed to look real,” Mearsheimer said, warning that such content threatens open discourse and public trust. He added that viewers could easily mistake the clips for genuine commentary, given their realistic presentation.
Slow takedowns and rapid reappearance
A key challenge, Mearsheimer said, was YouTube’s reporting system, which requires individual videos to be flagged rather than allowing entire channels to be reported unless specific naming criteria are met. This forced his team to submit takedown requests one by one—a process that required significant time and resources.
Despite repeated requests, new channels continued to appear, sometimes using slight spelling variations of his name to avoid detection. While YouTube eventually removed 41 of the identified channels, many videos had already gained traction by the time action was taken.
Experts say this pattern reflects a broader problem. As AI tools become cheaper and more accessible, impersonation can scale rapidly, shifting the burden onto victims to prove content is fake rather than on platforms to prevent its spread.
YouTube has said it enforces its policies consistently and is investing in systems to reduce the circulation of low-quality and misleading AI-generated content. The company has also signalled plans to expand AI tools for creators while improving safeguards.
A growing problem for public figures
Mearsheimer’s experience mirrors similar cases involving doctors, business leaders and academics whose likenesses have been misused to promote false narratives or fraudulent schemes. To counter the issue, he plans to launch an official YouTube channel to help audiences identify authentic content.
Other academics, including US economist Jeffrey Sachs, have taken similar steps, warning that deepfake impersonation has become a persistent and evolving online threat.