Fake War Videos Flood Social Media

In the hours and days following the outbreak of military operations between the United States, Israel, and Iran, social media platform X was flooded with AI-generated videos purporting to show combat footage. Fabricated clips depicting Iranian ballistic missiles striking sites in Israel, explosions near the Dome of the Rock in Jerusalem, and aerial bombardment of Iranian cities circulated widely, many of them shared by verified accounts eligible for financial payouts through X's creator revenue sharing program.

The proliferation of fake war footage represents a significant escalation in the information warfare challenges posed by generative AI. While misleading content has accompanied every modern conflict — from repurposed video game footage passed off as real combat to old war clips recycled as current events — the quality and volume of AI-generated war content have reached unprecedented levels, making rapid identification of fabricated material far more difficult for both platforms and ordinary users.

X's Policy Response

X's head of product, Nikita Bier, announced that the platform would revise its Creator Revenue Sharing policies in response to the flood of AI-generated conflict content. Under the new rules, users who post AI-generated videos of armed conflict without adding a disclosure label will be suspended from the revenue sharing program for 90 days. Subsequent violations will result in permanent removal from the monetization program.

Enforcement relies on two mechanisms: Community Notes — X's crowdsourced fact-checking system — and automated detection of metadata embedded by generative AI tools. When either mechanism identifies unlabeled AI content depicting conflict, the posting account will lose its monetization eligibility.

Notably, the policy does not remove AI-generated war footage from the platform. Accounts that post fabricated combat videos without labels face only financial penalties — they can continue sharing the content with their followers. This approach reflects X's broader philosophical stance that content moderation should focus on labeling and context rather than removal, even when the content in question could contribute to panic, misinformation, or real-world harm during an active military conflict.