The Unanswered Question
The public confrontation between Anthropic and the Department of Defense has forced into the open a question that U.S. law has never clearly answered: Is the Pentagon allowed to conduct mass surveillance on Americans using artificial intelligence? More than a decade after Edward Snowden exposed the NSA's bulk metadata collection, the legal framework governing domestic surveillance by the military remains riddled with gaps.
The flashpoint was the Pentagon's desire to use Anthropic's Claude AI to analyze bulk commercial data on Americans — the kind of information that data brokers sell openly, including location data, purchase histories, browsing patterns, and social media activity. Anthropic refused, demanding that its AI not be used for mass domestic surveillance or autonomous weapons. A week after negotiations collapsed, the Pentagon designated Anthropic a "supply chain risk," a classification typically reserved for foreign companies that threaten national security.
What the Law Actually Says
The legal landscape governing military surveillance of Americans is a patchwork of statutes, executive orders, and court interpretations that were largely written before AI existed. The Posse Comitatus Act of 1878 prohibits the use of federal military forces for domestic law enforcement, but its application to intelligence gathering rather than enforcement actions is contested.
Executive Order 12333, signed by Ronald Reagan in 1981 and still in effect, authorizes intelligence agencies to collect "publicly available information" and information obtained from "cooperating sources." Commercial data purchased from data brokers arguably falls into both categories, creating a legal pathway for bulk data acquisition that many civil liberties advocates find alarming.
The Foreign Intelligence Surveillance Act establishes judicial oversight for certain types of surveillance but contains exceptions for information that is voluntarily provided or commercially available. When the Pentagon buys Americans' data from commercial sources rather than intercepting their communications, FISA's protections may not apply.
The AI Difference
What makes the current situation qualitatively different from previous surveillance debates is AI's ability to analyze bulk data at scale and extract insights that would be impossible for human analysts. A database of location records that might take thousands of analysts years to process can be analyzed by an AI system in hours, identifying patterns of behavior, social connections, and potential vulnerabilities across millions of individuals.
This capability transforms the nature of the surveillance question. When bulk data collection was primarily about storage — amassing records that might be searched later for specific purposes — the privacy implications, while significant, were bounded by the practical limits of human analysis. AI removes those limits, making it possible to conduct comprehensive behavioral analysis of entire populations from commercially available data.
The result is a form of surveillance that is technically legal under many interpretations of existing law — yet is precisely the kind of mass domestic monitoring the framers of those laws intended to prevent.
The OpenAI Contrast
While Anthropic drew its line, rival OpenAI signed a deal allowing the Pentagon to use its AI for "all lawful purposes" — language that critics say is deliberately broad enough to encompass domestic surveillance. CEO Sam Altman defended the agreement as supporting legitimate national security needs, but the backlash was fierce: ChatGPT uninstalls surged nearly 300 percent, and employees at both OpenAI and Google publicly supported Anthropic's position.
The divergent approaches of the two leading AI companies have created an uncomfortable dynamic. If one company's AI is available for surveillance and another's is not, the practical effect of Anthropic's ethical stance may be limited — the Pentagon simply uses the willing partner's technology instead. This raises the question of whether individual corporate ethics can effectively constrain government surveillance or whether only legislation can provide meaningful protections.
Congressional Response
The controversy has prompted legislative action, though its prospects are uncertain. A bipartisan group of senators has introduced the AI Surveillance Accountability Act, which would require judicial authorization before federal agencies could use AI to analyze bulk data on Americans purchased from commercial sources. The bill would close what sponsors call the "data broker loophole" that allows the government to circumvent surveillance protections by buying rather than intercepting personal information.
Defense hawks have opposed the legislation, arguing that restricting AI analysis of commercially available data would handicap counterterrorism and counterintelligence operations. They note that the data is already legally available for purchase by anyone and that preventing the government from analyzing it would create an asymmetric disadvantage against adversaries who face no such restrictions.
The Broader Stakes
The debate extends beyond the specific Anthropic-Pentagon dispute to fundamental questions about privacy in the age of AI. If the government can purchase commercial data on Americans and use AI to extract detailed behavioral profiles, the distinction between a surveillance state and a society with privacy protections becomes largely theoretical.
Civil liberties organizations argue that existing law was designed for an era when the practical difficulty of mass analysis served as a de facto privacy protection. Now that AI has eliminated that practical barrier, they say, the law must be updated to explicitly prohibit what technology has made newly possible.
The outcome of this debate will likely shape the relationship between AI companies, the military, and civil liberties for decades to come. Whether the resolution comes through legislation, court decisions, or corporate policies remains an open question — but the days of ambiguity appear to be numbered.
This article is based on reporting by MIT Technology Review.