Fake War Videos Flood Social Media
In the hours and days after military hostilities broke out among the United States, Israel, and Iran, social media platform X was flooded with AI-generated videos purporting to show combat footage. Fabricated clips depicting Iranian ballistic missiles striking sites in Israel, explosions near the Dome of the Rock in Jerusalem, and aerial bombardment of Iranian cities circulated widely, many of them shared by verified accounts eligible for financial payouts through X's creator revenue sharing program.
The proliferation of fake war footage represents a significant escalation in the information warfare challenges posed by generative AI. While misleading content has accompanied every modern conflict — from repurposed video game footage passed off as real combat to old war clips recycled as current events — the quality and volume of AI-generated war content have reached unprecedented levels, making rapid identification of fabricated material far more difficult for both platforms and ordinary users.
X's Policy Response
X's head of product, Nikita Bier, announced that the platform would revise its Creator Revenue Sharing policies in response to the flood of AI-generated conflict content. Under the new rules, users who post AI-generated videos of armed conflict without adding a disclosure label will be suspended from the revenue sharing program for 90 days. Subsequent violations will result in permanent removal from the monetization program.
The enforcement mechanism relies primarily on community policing. AI war footage will be flagged through Community Notes — X's crowdsourced fact-checking system — or through automated detection of metadata from generative AI tools. When either mechanism identifies unlabeled AI content depicting conflict, the posting account will lose its monetization eligibility.
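As a rough illustration of the second mechanism, the sketch below checks a video file for provenance tags that some generative tools embed, such as C2PA manifests or the IPTC DigitalSourceType value trainedAlgorithmicMedia. It assumes the exiftool command-line utility is installed; the marker list is an illustrative assumption, not X's actual detection pipeline.

```python
import json
import subprocess

# Provenance markers that some generative tools embed in their output.
# Illustrative only; a real detector would parse and cryptographically
# verify C2PA manifests rather than search for substrings.
AI_MARKERS = ("c2pa", "trainedalgorithmicmedia")

def has_ai_metadata(path: str) -> bool:
    """Return True if exiftool's output mentions a known AI-provenance marker.

    Requires the exiftool CLI. The absence of a marker proves nothing:
    metadata survives only until someone strips or re-encodes the file.
    """
    result = subprocess.run(
        ["exiftool", "-json", path],
        capture_output=True, text=True, check=True,
    )
    tags = json.dumps(json.loads(result.stdout)).lower()
    return any(marker in tags for marker in AI_MARKERS)

if __name__ == "__main__":
    import sys
    print(has_ai_metadata(sys.argv[1]))
```

As the docstring notes, a check like this can only catch files whose provenance data is still intact, a limitation discussed further below.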
Notably, the policy does not remove AI-generated war footage from the platform. Accounts that post fabricated combat videos without labels face only financial penalties; they can continue sharing the content with their followers. This approach reflects X's broader philosophical stance that content moderation should focus on labeling and context rather than removal, even when the content in question could contribute to panic, misinformation, or real-world harm during an active military conflict.
The Incentive Problem
X's creator revenue sharing program, which pays verified accounts based on engagement metrics, has created powerful financial incentives for producing viral content — including sensational and misleading material. During high-attention events like military conflicts, accounts that post dramatic footage early and frequently can generate substantial engagement regardless of whether the content is authentic.
The economics are straightforward. A fabricated video showing a spectacular missile strike takes minutes to generate using commercially available AI video tools. If it goes viral before being identified as fake, it can generate millions of impressions and significant revenue for the posting account. Even after being labeled as AI-generated through Community Notes, the video may have already achieved most of its engagement — and the creator may have already earned the associated revenue.
The 90-day monetization suspension creates a deterrent, but critics argue it is insufficient given the potential payoffs. An account that earns thousands of dollars from a viral fake war video and then faces a 90-day suspension has made a profitable trade. The permanent ban for repeat offenses provides a stronger deterrent but only affects habitual offenders, not the wave of opportunistic accounts that create fake content during discrete high-attention events.
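A back-of-envelope calculation makes the critics' point concrete. All figures below are hypothetical, since X does not publish its creator payout rates:

```python
# Hypothetical figures; X does not disclose per-view payout rates.
viral_payout = 5_000       # one-time revenue from a viral fake video, USD
monthly_baseline = 300     # the account's ordinary monthly creator earnings, USD
suspension_months = 3      # the 90-day monetization suspension

forgone_earnings = monthly_baseline * suspension_months
net_gain = viral_payout - forgone_earnings
print(f"Net gain despite suspension: ${net_gain:,}")  # $4,100
```

Under assumptions like these, the suspension amounts to a modest fee on a profitable transaction rather than a deterrent.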
AI Makes Old Problems Worse
Fake conflict footage predates artificial intelligence. For over a decade, footage from the military simulation video game Arma 3 has been repeatedly shared as real combat footage at the outbreak of virtually every new conflict. The same three clips circulated so frequently that the game's developer, Bohemia Interactive, publicly expressed frustration. Pakistan's government once shared Arma 3 footage in an official social media post that remains live on X to this day.
What AI has changed is the barrier to creation. Generating convincing fake combat footage previously required either access to real footage from another conflict or video editing skills sufficient to repurpose game footage convincingly. AI video generation tools have reduced the creation barrier to typing a text prompt. A user can generate footage of missile strikes on specific landmarks by describing the scene in plain language, and the resulting video may be convincing enough to fool casual viewers scrolling through their feeds.
Watermarking systems designed to identify AI-generated content have proven inadequate. Research has demonstrated that AI watermarks can be removed with freely available tools, and many AI video generation platforms either do not apply watermarks or apply them in ways that are easily stripped. X's reliance on metadata detection as an enforcement mechanism is therefore effective only against content creators who do not take basic steps to remove identifying information.
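To see how thin the metadata signal is, consider this minimal sketch, assuming the ffmpeg command-line tool is installed. The standard -map_metadata -1 option discards container-level metadata, and stream copying means the video is not even re-encoded:

```python
import subprocess

def strip_metadata(src: str, dst: str) -> None:
    """Copy a video's streams while discarding its container metadata.

    Runs in seconds because no re-encoding occurs. Pixel-level
    watermarks are a separate problem, but as noted above, research
    has shown those can also be removed with available tools.
    """
    subprocess.run(
        ["ffmpeg", "-i", src, "-map_metadata", "-1", "-c", "copy", dst],
        check=True,
    )
```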
Platform Accountability
The episode raises broader questions about platform responsibility during conflicts. X's approach of allowing fake war footage to remain on the platform while demonetizing its creators has been criticized as insufficient by information integrity researchers who argue that the platform's recommendation algorithms actively amplify sensational content regardless of its authenticity.
When an AI-generated video of a missile strike goes viral, X's algorithm promotes it to more users because it generates high engagement — clicks, views, shares, and quote tweets. The algorithmic amplification occurs before Community Notes can evaluate and label the content, creating a window during which millions of users may see and share fabricated footage believing it to be real.
This dynamic is particularly dangerous during active military conflicts, when false information about strikes, casualties, and military movements can influence public opinion, political decisions, and even military responses. The consequences of fake war footage going viral extend far beyond the digital realm — they can shape the real-world trajectory of the conflict itself.
The Broader Information Crisis
X's policy response to AI war footage is a microcosm of a larger challenge facing all information platforms in the age of generative AI. Tools for creating convincing fake content are becoming more accessible and more capable faster than the systems designed to detect and label what they produce. This asymmetry means that defenders — platforms, fact-checkers, and users — are fighting a losing battle against an ever-growing volume of synthetic media.
No platform has yet demonstrated an effective solution to this challenge at scale. The most promising approaches combine automated detection with human review and user education, but each of these elements faces significant limitations when applied to the volume and speed of content generated during a major news event like a military conflict.
This article is based on reporting by 404 Media.


