The Pentagon’s view of cyber AI is shifting
Two senior US defense technology officials said this week that the newest generation of cyber-capable artificial intelligence should not be understood only as a threat. Speaking at the SCSP AI+Expo in Washington, Assistant Secretary for Cyber Policy Katherine Sutton and Pentagon Chief Technology Officer Emil Michael argued that tools modeled on Anthropic’s unreleased Mythos system could also become powerful instruments for defense.
The remarks reflect a more pragmatic posture inside the Defense Department as anxiety grows around AI systems that can identify and exploit software weaknesses at unprecedented speed. Rather than framing that speed purely as a new source of danger, Pentagon officials are making the case that the same capability could be used to harden vulnerable systems faster than human teams can manage today.
Sutton said the current patching model, which often unfolds over days or weeks, is no longer adequate in an environment where AI can move far faster. In her telling, the key opportunity is not abstract. It is secure code. If advanced models can rapidly detect flawed software and repair it, the military and its contractors could start reducing risk at a pace that legacy processes have never matched.
From “human speed” to “machine speed”
The officials’ comments centered on a simple but consequential point: vulnerabilities already exist across a sprawling software base, and AI changes the tempo at which they can be found, fixed, and exploited. Michael said those flaws are not new. What changes now is the timeline. Systems like Mythos may let defenders discover bugs faster, but they may also let attackers weaponize those same bugs faster.
That dual-use reality is what makes the moment so fraught for national security. Michael described it as a period in which the country, not just the federal government, needs to harden its digital infrastructure. The Defense Department depends on a patchwork of aging software systems and code bases that have accumulated technical debt over many years. In that environment, a model that can autonomously patch vulnerable code could do more than improve operations at the margins. It could accelerate work that officials suggest should have happened long ago.
The argument is not that cyber risk disappears when AI enters the process. It is that the baseline for acceptable response times is changing. If machine-speed exploitation becomes normal, then machine-speed remediation becomes necessary. That is a major shift for institutions built around slower acquisition cycles, lengthy certification processes, and fragmented software ownership.