A Disinformation Deluge
As the military conflict between the United States and Iran intensifies, experts are warning that an unprecedented wave of disinformation is flooding social media platforms, combining AI-generated and AI-manipulated content with traditional propaganda techniques to shape public perception of the war. The scale and sophistication of the campaigns are testing the limits of platform moderation systems and researchers' ability to separate fact from fiction in real time.
Multiple independent research groups have identified coordinated networks of accounts spreading fabricated or misleading content about the conflict across major social media platforms including X (formerly Twitter), TikTok, Telegram, and YouTube. The content ranges from crudely manipulated images to sophisticated AI-generated video that is difficult to distinguish from genuine footage without specialized analysis tools.
Types of Disinformation
Researchers have categorized the disinformation into several distinct types. The most prevalent involves misattributed footage, where real video from other conflicts or events is presented as depicting current operations in Iran. This technique exploits the fact that combat footage from different wars can look similar to untrained viewers, and the emotional impact of genuine violence makes it highly shareable regardless of its actual origin.
AI-generated content represents a newer and more concerning category. Generative AI tools can now produce realistic-looking video of military equipment, explosions, and urban destruction that never actually occurred. Several viral clips purporting to show specific strikes or casualties have been identified as wholly synthetic, created using commercially available AI video generation tools.
A third category involves authentic footage that has been selectively edited or presented with misleading context. This includes genuine military footage that is captioned with false descriptions of what it shows, real casualty images presented as being from different locations or events, and clips edited to remove context that would change their interpretation.
State and Non-State Actors
The disinformation campaigns appear to originate from multiple sources with different objectives. Iranian state media and affiliated accounts have been amplifying content that emphasizes civilian casualties from US and Israeli strikes, sometimes using unverified or fabricated imagery to intensify the narrative. These campaigns aim to build domestic support for continued resistance and generate international sympathy and opposition to the military campaign.
Conversely, pro-war accounts have circulated exaggerated claims of military success, downplayed civilian impact, and promoted narratives designed to maintain public support for continued operations. Some of this content has been traced to coordinated influence networks rather than organic user activity, though attribution remains challenging.
Non-state actors, including ideologically motivated groups, commercial disinformation operators, and individuals seeking viral engagement for profit, contribute additional layers of false and misleading content. The financial incentives of social media engagement mean that emotionally charged war content generates significant advertising revenue, creating economic motivation for producing disinformation regardless of political allegiance.
Platform Responses
Social media platforms have acknowledged the challenge, but their responses have been uneven. X has been criticized for reducing its trust and safety capacity in recent years, leaving it more vulnerable to coordinated manipulation campaigns. TikTok's algorithmic amplification of emotionally engaging content has made it a particularly effective vector for disinformation, as manipulated war footage can reach millions of views within hours of being posted.
YouTube and Meta's platforms have implemented more aggressive labeling and removal policies for conflict-related misinformation, but the volume of content being generated overwhelms automated detection systems. Human review teams, which are essential for evaluating the context and veracity of conflict footage, cannot keep pace with the rate at which new content appears.
The challenge is compounded by the legitimate difficulty of verifying claims in an active war zone where access is limited and official communications from all sides are unreliable. Even well-intentioned fact-checkers struggle to definitively confirm or deny specific claims when primary sources are unavailable or contradictory.
The AI Escalation
The use of AI-generated content in wartime disinformation represents a qualitative escalation from previous conflicts. During the early stages of the Russia-Ukraine war, most disinformation involved misattributed real footage; generative AI tools were not yet capable of producing convincing military content. The current Iran conflict is the first major war in which AI-generated video is being deployed at scale as a disinformation weapon.
This development has prompted urgent calls from researchers and policymakers for improved AI-generated content detection tools and mandatory disclosure requirements for synthetic media. Several research groups have released open-source detection tools, but the technology is evolving faster than detection capabilities, creating an asymmetry that favors disinformation producers.
For ordinary citizens trying to stay informed about the conflict, the disinformation environment creates a paradox: the more they seek information, the more likely they are to encounter false or manipulated content. Media literacy experts recommend relying on established news organizations with reporters in the region, cross-referencing claims across multiple independent sources, and maintaining healthy skepticism toward emotionally compelling footage that appears without clear provenance.
This article is based on reporting by Mashable.