The debate is shifting from approval points to design architecture

As military organizations invest more heavily in autonomous and increasingly agentic systems, the question of human control is becoming less about slogans and more about system design. A new proposal described by Breaking Defense argues that the common formulation of “human in the loop” is not enough if the machine has already shaped the battlefield picture, narrowed the available options, and constrained the human’s choices before a final approval request ever appears.

The authors call their idea “Synthesized Command & Control.” Their central claim is that meaningful human control should be embedded much earlier in the decision process. Rather than asking a person to approve or veto a strike at the end of an AI-enabled chain, they argue, commander preferences and operational intent should be systematically built into the software from the start.

Why final approval may be too late

The article’s critique of the standard model is straightforward. If an AI system is allowed to make upstream choices about force positioning, target prioritization, or recommendation framing at machine speed, then the human’s role at the end may be narrower than it appears. A commander might still technically authorize the action, but only after the automated system has already shaped the conditions under which the decision is made.

That is the core tension in human-machine teaming. Require human approval at every step, and the speed advantages of AI may disappear. Require it only at the end, and the human may retain formal authority while losing practical influence over the larger logic of the operation.

The proposal: encode intent, not just permission

The proposed answer is to encode human preferences preemptively. In this model, ideas such as commander’s intent would be translated into constraints and guidance inside the algorithm itself. The goal is not merely to create a checkpoint where a human can stop the machine, but to ensure the machine’s option-generation process is bounded from the outset by human judgment.
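The difference between intent as an upstream bound and approval as a downstream veto can be illustrated with a deliberately simplified sketch. Everything here is invented for illustration: the `Option` fields, the `CommanderIntent` constraints, and the thresholds are hypothetical, not drawn from the article or any real system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Option:
    target: str
    collateral_risk: float  # estimated risk, 0.0 to 1.0 (hypothetical metric)
    inside_zone: bool       # within the authorized engagement zone

@dataclass(frozen=True)
class CommanderIntent:
    """Commander intent expressed as machine-checkable constraints."""
    max_collateral_risk: float
    require_inside_zone: bool

    def permits(self, option: Option) -> bool:
        # An option is surfaced only if it satisfies every intent constraint.
        if option.collateral_risk > self.max_collateral_risk:
            return False
        if self.require_inside_zone and not option.inside_zone:
            return False
        return True

def generate_options(candidates: list[Option], intent: CommanderIntent) -> list[Option]:
    # Intent bounds the option space *before* anything reaches a human,
    # rather than acting as a veto on one finished recommendation.
    return [c for c in candidates if intent.permits(c)]

candidates = [
    Option("alpha", collateral_risk=0.05, inside_zone=True),
    Option("bravo", collateral_risk=0.40, inside_zone=True),
    Option("charlie", collateral_risk=0.02, inside_zone=False),
]
intent = CommanderIntent(max_collateral_risk=0.10, require_inside_zone=True)
allowed = generate_options(candidates, intent)
# Only "alpha" satisfies both constraints; the human never sees the others.
```

The design point is that the filter runs during option generation, so the human who later reviews `allowed` is choosing within a space already shaped by their own stated constraints, not by the machine's unbounded preferences. Real commander intent is far harder to formalize than two numeric and boolean rules; this sketch only shows where in the pipeline such a formalization would sit.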

That approach reflects a broader shift in AI governance thinking. The question is not only whether a system can be interrupted. It is whether the system’s reasoning space is aligned early enough that its speed and scale remain compatible with human authority.

Clarity is still missing across military AI categories

The article also points to a conceptual problem inside the current defense AI landscape: persistent ambiguity around terms such as automatic, semi-autonomous, autonomous, and agentic autonomous. That lack of definitional clarity complicates procurement, oversight, and doctrine. If different actors mean different things by the same labels, then debates about control, accountability, and acceptable use can become confused before technical questions are even addressed.

This matters in a context where the stakes and investments are large. The article references an almost $55 billion funding request for the Defense Autonomous Warfare Group and notes high-level attention from senior US defense leadership. With spending and political focus rising, vagueness about operational categories becomes more than a semantic issue. It becomes a governance risk.

A framework, not a finished solution

The proposal does not solve every problem. Translating human intent into code is difficult, and command intent can itself be ambiguous, contested, or subject to changing battlefield conditions. Still, the argument identifies a real weakness in simplistic “human approval” narratives. A late-stage approval button may satisfy a formal requirement while failing to preserve substantive control.

That is why this debate matters beyond military circles. It addresses a broader AI question that appears in many domains: at what point in a system’s workflow do human values actually shape outcomes? The authors’ answer is that in high-speed conflict environments, waiting until the end is not enough. If military AI is going to remain bounded by human judgment, that judgment has to be designed into the system before the system starts acting.

This article is based on reporting by Breaking Defense.