War as Content
As U.S. and Israeli forces continue strikes against Iran, a new phenomenon has emerged on the internet: dozens of AI-powered intelligence dashboards that allow anyone with a web browser to track the conflict in near real-time. Built using open-source data, satellite imagery, ship tracking, and AI-driven analysis, these platforms are being marketed as superior alternatives to traditional news coverage.
One of the most prominent dashboards was built by two members of the venture capital firm Andreessen Horowitz. It combines live data feeds with a chat function, news aggregation, and links to prediction markets where users can bet on outcomes like the identity of Iran's next supreme leader. When markets resolved in favor of Mojtaba Khamenei as Iran's new supreme leader, some bettors collected payouts.
The dashboard inspired a post on X: "Anyone wanna host a get together in SF and pull this up on a 100 inch TV?" The comment captures a troubling dynamic: the transformation of an active military conflict into a form of entertainment and social engagement.
Vibe-Coded Intelligence
A review of more than a dozen such dashboards reveals that many were "vibe-coded" in a matter of days using AI development tools. Some were built before the Iran conflict began, originally designed for monitoring other geopolitical situations, but nearly all have been repurposed and advertised as tools for getting closer to the truth of what is happening on the ground.
The creators position their platforms against traditional media, which they characterize as slow, biased, and filtered. "Just learned more in 30 seconds watching this map than reading or watching any major news network," one commenter wrote on LinkedIn, a sentiment echoed across social media.
One dashboard attracted the attention of a founder of Palantir, the intelligence company through which the U.S. military is reportedly accessing AI models like Anthropic's Claude during the conflict. The line between Silicon Valley's intelligence products and the hobbyist OSINT community is becoming increasingly blurred.
The Reliability Problem
Despite their sophisticated appearance, many of these dashboards aggregate data without rigorous verification. Open-source intelligence, or OSINT, has become a powerful tool for monitoring conflicts when practiced by trained analysts who understand how to verify and contextualize raw data. But the democratization of OSINT tools through AI has lowered the barrier to entry to the point where anyone can create a professional-looking intelligence platform regardless of their analytical expertise.
The risk is that users mistake visual sophistication for analytical rigor. A dashboard that combines satellite imagery, flight tracking data, and news feeds into an appealing interface may look authoritative while presenting unverified, decontextualized, or misleading information. In a conflict involving active information operations by multiple state actors, the potential for manipulation is significant.
Professional intelligence analysts have raised concerns that the proliferation of amateur OSINT dashboards could actually degrade public understanding of the conflict by flooding the information space with unverified claims presented with false authority.
Prediction Markets and Gamification
The integration of prediction markets into conflict-tracking dashboards represents a particularly uncomfortable intersection of finance and warfare. Users can bet on military outcomes, leadership changes, and casualty estimates, turning the human cost of armed conflict into a financial instrument.
Proponents argue that prediction markets aggregate information more efficiently than traditional analysis and that they provide genuine predictive value. Critics counter that betting on war outcomes gamifies human suffering and creates financial incentives to spread misinformation that could move markets.
The ethical questions extend beyond prediction markets to the broader phenomenon of conflict-as-content. When tracking a military campaign becomes a social activity — something to watch on a big screen with friends — the psychological distance between viewer and victim may widen to the point where the human stakes of the conflict become abstract.
The AI Amplification Effect
AI is central to both the creation and consumption of these dashboards. AI coding tools make it possible to build sophisticated-looking platforms in days rather than months. AI analysis tools process and summarize vast quantities of open-source data. And AI-generated content fills the commentary and analysis layers that give these platforms their sense of authority.
This AI amplification effect means that the volume of conflict-related intelligence content has exploded far beyond what human analysts or editors could produce or verify. The result is an information environment where signal and noise are increasingly difficult to distinguish, even for experienced observers.
The phenomenon also raises questions about the responsibility of AI companies whose tools are being used to build these platforms. As AI makes it easier to create products that look like intelligence tools but lack the rigor of actual intelligence analysis, the line between informing the public and misleading them becomes dangerously thin.
This article is based on reporting by MIT Technology Review.