The Army is testing how far autonomous cyber defense should go
The U.S. Army is moving quickly to examine a more aggressive role for AI in cyber defense after a recent wargame with private-sector technology leaders explored how future attacks could overwhelm human operators. The exercise, known as AI Table Top Exercise 2.0, brought together executives from 14 tech companies, Army officials, and U.S. Cyber Command around a stark scenario: a 2027 Indo-Pacific crisis escalating into a cyber war against American military networks.
The main conclusion was not that AI can solve cyber defense on its own. It was that human-speed defense may no longer be enough if adversaries are using adaptive, AI-enabled attack systems that can probe, exploit, and shift tactics faster than people can respond. That has led Army officials to talk more openly about agentic AI that can move from detection into action, and about building a policy structure for when those systems should be allowed greater autonomy.
From warning to response
Brandon Pugh, principal cyber advisor to Army Secretary Daniel Driscoll, framed the issue in terms of risk appetite. In peacetime, human oversight may remain the default. In wartime, especially during a wave of attacks, the Army may need a different threshold for allowing software agents to act. That is the logic behind what officials described as a potential “risk continuum” policy, an approach that would vary human involvement depending on circumstances.
That distinction is critical. The Department of Defense is already using AI to help detect intrusions on its networks. Detection, however, is only the first step. The harder question is whether AI systems should be empowered to take direct response actions on their own when a breach is underway.
Pugh said the Army is already strong at using AI for detection, but now needs to push toward agentic capabilities that can not only identify malicious behavior but respond to it. That could mean isolating systems, blocking connections, triggering countermeasures, or otherwise interrupting an attack before it spreads.
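The detection-to-response handoff described above can be sketched in code. This is a purely illustrative toy, not any system the Army has described: the `Detection` fields, severity scoring, and threshold values are all assumptions chosen to show how an agent might escalate from alerting a human to acting on its own.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    ISOLATE_HOST = auto()      # quarantine the affected system
    BLOCK_CONNECTION = auto()  # sever the malicious connection
    ALERT_ONLY = auto()        # flag for a human operator, take no action

@dataclass
class Detection:
    host: str
    remote_addr: str
    severity: float  # model-scored, 0.0 (benign) to 1.0 (critical)

def respond(detection: Detection, autonomy_threshold: float) -> Action:
    """Move from detection into action: the agent acts on its own only
    when its severity score clears the configured autonomy threshold."""
    if detection.severity >= autonomy_threshold:
        if detection.severity >= 0.9:
            # High-confidence, critical threat: interrupt the attack
            # before it spreads.
            return Action.ISOLATE_HOST
        return Action.BLOCK_CONNECTION
    # Below threshold: keep the human in the loop.
    return Action.ALERT_ONLY
```

The single `autonomy_threshold` parameter is the interesting knob: raising or lowering it is how a policy layer, rather than the detection model, would decide when the agent may act unsupervised.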
Why the Army thinks this is urgent
Lt. Gen. Christopher Eubank, who leads Army Cyber Command, described the challenge in blunt terms. In a world of agentic AI, he said, telling defenders to “patch faster” is unrealistic. If offensive systems are launching repeated attacks that adapt continuously to defensive changes, human teams alone may simply be too slow to keep up.
The exercise scenario was built around exactly that premise. Officials said the hypothetical adversary used AI to launch salvo after salvo of cyberattacks that adapted to the Army’s defensive posture faster than a human defender could respond. That kind of pressure is different from ordinary network defense. It turns cyber operations into a speed contest in which hesitation itself becomes a vulnerability.
Seen that way, the Army’s interest in greater autonomy is less about enthusiasm for automation and more about a practical response to time compression. If the attack loop is accelerating, the defense loop must accelerate too.
Industry’s role in shaping doctrine
One notable aspect of the exercise is that it was not run as a narrowly scripted technical simulation. Designed and orchestrated by the Special Competitive Studies Project, it used a seminar-style format in which executives from 14 tech firms offered recommendations and military participants interrogated those ideas. That format signals that the Army is not just shopping for products. It is trying to understand how commercial AI thinking should shape military cyber doctrine.

This is an important distinction. The core questions are not simply whether a tool works, but who is allowed to authorize it, under what conditions, with what safeguards, and with what tolerance for false positives or unintended consequences. Those are policy and command questions as much as they are engineering questions.
The Army appears to recognize that. The exercise did not produce definitive answers, and officials were candid about that. But it gave military leaders outside perspectives on how to think about autonomous defense in a conflict scenario where delay could be catastrophic.
The policy problem may be harder than the technical one
Building agentic cyber systems is difficult. Building trust in them may be harder. A defensive AI that acts too slowly is ineffective. One that acts too quickly or too broadly could disrupt friendly operations, take down legitimate traffic, or introduce new risks during a crisis.
That is why the emerging “risk continuum” concept may matter more than any specific product announcement. It suggests the Army is preparing for a future in which levels of autonomy are not fixed but conditional. A routine network environment might demand tight human control. A major wartime assault might justify much looser supervision if the alternative is being outpaced by machine-driven attacks.
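The conditional structure of such a continuum can be made concrete with a short sketch. No such framework has been published; the postures, oversight levels, and mapping below are hypothetical, chosen only to illustrate the idea that autonomy would be a function of operational context rather than a fixed setting.

```python
from enum import Enum

class Posture(Enum):
    PEACETIME = "peacetime"   # routine network environment
    ELEVATED = "elevated"     # crisis, attacks anticipated
    WARTIME = "wartime"       # sustained machine-speed assault

class Oversight(Enum):
    HUMAN_APPROVES_EACH_ACTION = "human-in-the-loop"
    HUMAN_CAN_VETO = "human-on-the-loop"
    AUTONOMOUS_WITH_AUDIT = "autonomous, logged for after-action review"

# Hypothetical risk continuum: human involvement loosens only as the
# operational posture escalates and attack tempo outpaces human response.
RISK_CONTINUUM = {
    Posture.PEACETIME: Oversight.HUMAN_APPROVES_EACH_ACTION,
    Posture.ELEVATED: Oversight.HUMAN_CAN_VETO,
    Posture.WARTIME: Oversight.AUTONOMOUS_WITH_AUDIT,
}

def allowed_oversight(posture: Posture) -> Oversight:
    """Return the level of agent autonomy permitted under a given posture."""
    return RISK_CONTINUUM[posture]
```

The design point is that the mapping itself, not the agent, is where command authority lives: changing the posture is a human decision, and everything the agent does downstream inherits its legitimacy from that decision.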
Such a framework would not settle every ethical or operational question, but it would offer a way to connect technical capabilities to command authority and mission context. In practice, that may be what determines whether agentic defense is usable at scale.
What comes next
The Army’s next steps appear likely to include both tool development and policy design. Officials said they want to fast-track new AI capabilities while also working through the rules that would govern their use. That dual track is sensible because one without the other would fail. Technology without doctrine risks chaos. Doctrine without capable technology risks irrelevance.
The broader implication is that military cyber defense is entering a new phase. AI is no longer being treated only as an aid for analysts. It is being considered as an operational actor, one that may need to take actions at machine speed when human timing is no longer sufficient.
The Army has not yet decided how much leash to give those agents. But after this wargame, it is clearly preparing for a future in which not making that decision could be the greater risk.
This article is based on reporting by Breaking Defense.