A Deal With Far-Reaching Implications

It has been just over two weeks since OpenAI reached a landmark agreement allowing the US Department of Defense to use its AI systems in classified military environments. The deal has generated significant attention and concern among AI researchers, arms control experts, and civil liberties advocates. With US forces actively engaged in military operations against Iran, including strikes on Kharg Island, the timing makes the agreement's implications more immediate than most anticipated.

The basic parameters of the deal are known: OpenAI will permit military use of its models in classified settings. Sam Altman has stated publicly that the agreement does not allow the military to build autonomous weapons with the company's technology, and further that the arrangement prevents domestic surveillance applications. But examination of the agreement's actual terms reveals that both restrictions are enforced primarily through the Pentagon's own guidelines on autonomous weapons systems, guidelines that are themselves quite permissive by international standards.

What the Pentagon's Autonomous Weapons Guidelines Actually Say

The Department of Defense's directive on autonomy in weapon systems (DoD Directive 3000.09) requires that autonomous and semi-autonomous weapons be designed to allow commanders and operators to exercise "appropriate levels of human judgment over the use of force." Critics of this language have consistently noted that it does not define what "appropriate" means, does not require a human to approve each individual targeting decision, and explicitly allows semi-autonomous systems in which a human approves target categories or strike parameters in advance without approving each specific engagement.

This creates a significant gap between OpenAI's public framing ("we will not let the military build autonomous weapons") and what the actual agreement permits. Under the Pentagon's own standards, an AI system that processes sensor data, identifies targets, and executes strikes without per-shot human authorization could still qualify as compliant if a human approved the targeting rules in advance. Whether OpenAI's models could be integrated into such a system without violating the agreement is not publicly clear.

Iran as the Test Case

The ongoing conflict with Iran makes these questions concrete rather than hypothetical. US Central Command is conducting strike operations in the Persian Gulf region, coordinating responses to Iran's asymmetric naval threats, and managing intelligence collection across a complex battlespace. Each of these activities involves categories of AI application into which OpenAI's models could potentially be integrated.

On the intelligence side, large language models have proven capable of rapidly synthesizing information from multiple sources, translating foreign-language communications, and identifying patterns in structured data — all tasks relevant to military intelligence analysis. These applications are widely understood to be among the primary military uses of commercial AI and pose fewer ethical concerns than autonomous targeting.

More concerning are potential applications in target identification or battle damage assessment, where AI systems could process imagery or signals intelligence to identify military assets, track movements, or evaluate strike results. These are areas where the distinction between a decision-support tool and an autonomous system becomes blurry, and where the specific language of OpenAI's agreement would determine whether a given application is permitted.

The Surveillance Restriction and Its Limits

Altman's claim that the agreement prevents use of OpenAI's technology for domestic surveillance is more complicated than it sounds. The restriction as described applies to domestic surveillance (the collection and analysis of data on US citizens) but does not address foreign intelligence collection. In the context of military operations, surveillance of adversary military communications, tracking of vessel movements, and monitoring of Iranian regime communications would all qualify as foreign intelligence collection and would fall outside a domestic surveillance restriction.

Civil liberties advocates who focus on domestic surveillance may not prioritize the foreign intelligence applications that military commanders find most valuable. But OpenAI's public communications conflate the two, which suggests the company's framing may not fully capture the range of military applications the deal enables.

What OpenAI's Motivations Appear to Be

OpenAI is not the first major technology company to navigate the tension between commercial AI development and military application. Google faced significant internal opposition over its Project Maven contract in 2018, ultimately declining to renew it. Microsoft and Amazon have continued to expand defense contracts despite some internal dissent.

OpenAI's decision to enter the classified military market suggests a strategic calculation that competing for government AI contracts is important for the company's long-term position, particularly as it competes with Anthropic, Google DeepMind, and defense-focused AI companies like Scale AI and Palantir. The Pentagon's AI budget is growing rapidly, and the company that establishes itself as the trusted government AI partner early may secure significant long-term advantages.

This article is based on reporting by MIT Technology Review.