Cogs of War

Military Operational Thinking in an Age of Artificial Intelligence

March 18, 2026
Anders McD Sookermany and Thomas Slensvik

In recent years, as AI has begun to enter military planning and operational design, a persistent unease has surfaced among practitioners. Even with improved tools, increased tempo, and unprecedented access to data, plans continue to falter on integration, coherence, and a shared sense of direction. Marco Lyons’ recent War on the Rocks article on the perceived decline of operational art gives voice to this unease in a way that is both timely and important.

We do not know enough about the specific wargame, its constraints, or its internal dynamics to adjudicate these conclusions directly. What Lyons’ account nevertheless captures with clarity is a set of recurring difficulties that many practitioners recognize: fragmented campaigns, sequential decision-making, and a widening gap between planning activity and operational coherence.

Drawing on our experience teaching operational art and experimenting with planning, we share this concern. Yet Lyons’ observations may also point to something deeper: a tension between different ways of thinking about operations.

As AI increasingly shapes what planners see, what they treat as relevant, and how coherence is represented, this tension between different ways of making sense of operations becomes more consequential. Our aim in this piece is therefore not to dismiss Lyons’ argument, but to broaden the discussion. Specifically, we seek to clarify how different ways of reasoning about operations coexist in planning practice, and how the growing use of AI may privilege some ways of thinking about operations while obscuring others.

When Operational Art Is Treated as a Single Language

Lyons’ analysis is rooted in a well-established understanding of operational art — one that has shaped Western planning practice for decades. Within this tradition, operational art functions as a coherent professional language: a means of linking strategic aims to tactical actions through concepts such as centers of gravity, decisive points, lines of operation, and the integration and synchronization of effects. This language has provided generations of planners with shared reference points, an analytical structure, and a way to coordinate complex action under pressure.

What often remains implicit, however, is the assumption that operational art constitutes a single, internally coherent practice — one that can be mastered, applied, and evaluated against common standards. In this view, breakdowns in operational performance are naturally interpreted as failures of application: insufficient rigor, weak conceptual understanding, or inadequate training. Lyons’ diagnosis largely follows this logic. When planners struggle to meaningfully integrate operations, misidentify centers of gravity, or reduce campaigns to sequential decision points, the conclusion is that operational art itself has eroded and must be restored through better education, more demanding wargaming, and improved tools.

But another interpretation is also possible. Many of the difficulties Lyons identifies can emerge not only when a shared operational language is forgotten or misapplied, but when one particular understanding of operational art is stretched across environments that are increasingly complex and adaptive. In such contexts, planners may follow doctrinally sound procedures and still experience a growing sense that the situation does not quite add up — that reality does not fit the procedure, and the procedure does not fit reality.

One way this tension manifests in practice — and one we have repeatedly encountered in teaching and experimentation — is what might be called a checklist effect. As planning processes become increasingly structured around templates, procedural steps, and standardized products, attention can shift from exploring the problem to completing the process. Plans are completed, briefed, and approved — yet many participants retain a sense that the problem itself remains only partially understood.

In planning exercises, student teams working through standard procedures often converge quickly on similar center of gravity analyses and courses of action. The plans appear analytically sound and procedurally correct. Yet when teams are pushed to step back and examine the problem differently — for example, by asking what sustains the adversary’s system, how its leadership understands risk, or what enables it to keep fighting — the operational picture often changes. In several exercises, teams that stayed longer with the problem rather than the template reached markedly different conclusions about what the operation needed to achieve.

What initially appears to be a technical planning exercise therefore often depends heavily on which tradition of operational thinking shapes how the problem is framed. Over time, this can produce two closely related effects. First, the completion of planning products can come to substitute for serious engagement with the problem. Strategic to tactical intent may remain vague, yet appear stable because plans are procedurally coherent and internally consistent. Second, planning activity can gradually become decoupled from operational understanding, even as procedural rigor increases.

These effects are rarely intentional, but they are systematic. The problem, in such cases, is not a lack of discipline or effort, but that rigor is being exercised within a framing that no longer adequately reflects the character of the problem being confronted.

Seen this way, the unease Lyons captures may not signal a simple decline in operational art. It may instead reflect growing tension between different ways of thinking operationally that coexist — often unacknowledged — within contemporary planning practice. If so, the issue is not merely how well operational art is applied, but which understanding of operational art is implicitly favored when problems are framed, analyzed, and acted upon.

Three Traditions of Operational Thinking

Rather than treating operational art as a single body of knowledge, it is more useful to see it as a set of traditions of operational thinking shaped by different historical problems and assumptions about how operations work. Each directs attention toward different aspects of the same operational reality, and what sometimes looks like a clash of methods is often a clash between different ways of focusing that attention. These traditions are not simply doctrinal variations: They represent different ways of understanding what matters in war, how coherence is achieved, and how action should relate to understanding.

One influential tradition is the Anglo-American center of gravity approach. Here, operational thinking emphasizes analytic discovery through decomposition: identifying an adversary’s critical capabilities, requirements, and vulnerabilities in order to locate decisive points of intervention. The strength of this approach lies in clarity and traceability. Complex situations are rendered intelligible through structured analysis, and improved understanding is expected to translate directly into improved action. Within this logic, better data, better models, and better tools promise better decisions — a logic that aligns naturally with contemporary interest in artificial intelligence.

A second tradition, rooted in German military thought, centers on the concepts of schwerpunkt and auftragstaktik. In this view, the decisive focus of an operation is not an objective feature waiting to be discovered through prior analysis, but a relational configuration established through judgment in context. Terrain, enemy disposition, friendly forces, tempo, and intent interact dynamically, and the task of command is to recognize emerging opportunities and concentrate effort at the right moment. Coherence is achieved less through analytic completeness than through situational awareness, initiative, and the ability to adapt as conditions change. Understanding, in this tradition, is inseparable from action and expressed through timely judgment under uncertainty.

A third tradition can be found in Soviet operational art, which approached warfare as a system to be orchestrated in depth. Emerging from the challenges of industrialized mass warfare and vast physical space, this tradition emphasized simultaneity, echeloned forces, and the deliberate sequencing of operations across space and time. Rather than seeking collapse through a single decisive node, coherence was pursued through sustained pressure on the adversary’s system as a whole. Material factors, force ratios, logistics, and tempo were central, and success depended on the capacity to coordinate large-scale operations over extended periods — generating cumulative effects and denying the adversary recovery.

Other traditions outside Western thinking — including those emerging from Chinese operational thought — start from different assumptions about how coherence in war is achieved. Chinese doctrine on systems destruction warfare and informationized local wars emphasizes disrupting the adversary’s operational system as a whole, rather than collapsing a single decisive node. Concepts such as shi — situational potential — focus on shaping the overall situation and gradually weakening how the opponent’s system functions.

Leadership research suggests that Chinese leadership practice is internally differentiated: Confucian and Legalist logics underpin rule-bound loyalty and hierarchical control at lower levels, while Daoist ideas of wu wei and paradoxical adaptation play a larger role at the higher levels. These traditions coexist and shape different modes of decision-making in complex environments. A doctrinally locked understanding of operational art — such as the one Lyons implicitly leans on — may therefore struggle against an adversary whose operational practice is shaped by multiple vertically layered logics rather than a single stable framework.

When a center of gravity framework is applied to an adversary whose operational logic differs fundamentally from the planner’s own, the analysis can begin from the wrong premise. Planners may then identify decisive nodes that appear compelling within the analytic framework but carry little meaning inside the adversary’s own operational system, and therefore have little impact. In that case the framework itself — not only its application — becomes part of the problem Lyons describes.

Each of these traditions continues to evolve in response to technological change, organizational learning, and shifting strategic contexts.

Two clarifications are worth making.

First, these traditions can overlap in form. A staff using a center of gravity template may still display excellent judgment under uncertainty, and a commander operating through mission command may still rely on careful analysis. The distinction here is not about which tools appear on the surface, but about the underlying ways of thinking that shape how commanders and their staffs see the relationship between analysis and action.

Second, problems arise when one tradition of operational thinking is treated as neutral or universally applicable — especially when others continue to shape practice in the background. When different traditions are mixed without being recognized as different, friction can appear as poor doctrine or weak rigor, while the deeper issue is a mismatch in underlying assumptions about what operational thinking is supposed to do.

As AI enters operational planning, it does not arrive in a neutral space. It inevitably amplifies whichever operational thinking tradition structures the planning process: accelerating certain forms of analysis, reinforcing patterns of attention, and shaping what is rendered visible or actionable. Understanding this dynamic requires more than improved tools or renewed doctrinal rigor. It calls for greater awareness of the assumptions that quietly shape how operational problems are framed, understood, and acted upon.

AI as an Amplifier of Operational Thinking

AI is often discussed in military contexts as either a transformative solution or a disruptive threat to operational planning. Both framings tend to obscure its most consequential effect. AI does not merely accelerate existing processes or automate discrete tasks: It participates in shaping how operational problems are perceived, structured, and rendered intelligible in the first place. Its influence is therefore not confined to efficiency or speed, but extends into the foundations of how planning and judgment are formed.

AI does not enter planning processes as a neutral instrument. It operates within existing understandings and traditions of planning and inevitably reinforces the ways problems are already framed and understood. Decisions about data selection, model design, training sets, and interfaces encode assumptions about what matters, what counts as a relevant signal, and how coherence should appear. As a result, AI systems tend to amplify certain ways of understanding a situation while leaving others in the background. This amplification is rarely explicit: It occurs gradually, through repeated interaction, as planners learn to trust and adapt to what the system presents as most meaningful, relevant, and useful.

Unsurprisingly, current applications of AI align most readily with the analytic center of gravity tradition. Pattern recognition, network mapping, predictive modeling, and visualization promise greater clarity and tempo, particularly in environments characterized by information overload. Within this logic, improved analysis is assumed to produce improved decisions, and AI appears as a natural extension of long-standing efforts to enhance rigor, consistency, and traceability in operational planning. From this perspective, AI seems well suited to addressing precisely the shortcomings Lyons identifies.

Yet this very alignment also reveals the limits of a purely analytic framing. Forms of operational understanding that rely on situational judgment, improvisation, and the dynamic orchestration of effects over time do not translate easily into machine-readable representations. The capacity to recognize emerging opportunities, to act under deep uncertainty, or to sustain coherence through adaptive command relationships resists reduction to data-driven models alone. As AI-supported processes increasingly structure attention and expectation, such forms of understanding risk becoming less visible and less practiced.

The deeper risk, therefore, is not that AI produces incorrect answers. It is that AI subtly reshapes the relationship between human judgment and machine-generated coherence. When planning systems consistently present certain patterns, courses of action, or interpretations as more legible or compelling, there is a tendency for human understanding to adjust accordingly. Over time, planners may orient themselves toward what the system can represent and optimize, rather than toward the full complexity of the situation as experienced and interpreted through professional judgment. Coherence is then achieved not through understanding, but through alignment with system outputs.

If this is the case, the central challenge posed by AI is not primarily technical, nor is it resolved simply by placing a human in the loop. It is a professional challenge: how to remain aware of which forms of operational thinking are being amplified, which are being marginalized, and how this balance shapes operational judgment over time. Addressing this challenge requires more than improved tools or renewed doctrinal rigor. It calls for sustained attentiveness to the underlying assumptions that shape the foundations of operational art itself — and to how those foundations are being transformed as artificial intelligence becomes an increasingly active participant in operational planning.

From Doctrinal Rigor to Professional Awareness in Operational Planning

Lyons is right to emphasize the importance of doctrinal rigor, conceptual clarity, and disciplined operational thinking. In complex and contested environments, the absence of shared frameworks and professional standards tends to produce confusion rather than creativity or adaptability. Operational art has long provided a common language through which planners can coordinate action, integrate effects, and relate means to ends. This function remains essential, particularly as the tempo and informational density of planning continue to increase.

Yet rigor alone cannot resolve the tensions that arise when different traditions of operational thinking intersect within the same planning processes. When analytic, judgment-based, and system-oriented modes of reasoning are implicitly mixed — or when one dominates without recognition of the others — friction emerges that cannot be addressed through better application of doctrine alone. In such situations, planners may demonstrate procedural competence and conceptual fluency while still experiencing a growing disconnect between planning outputs and operational understanding.

What this moment increasingly demands is not simply improved mastery of a single framework, but professional awareness: the capacity to recognize which ways of thinking are shaping a plan — and which are being pushed aside in a given context. This does not mean abandoning established methods or professional standards of judgment. It means becoming aware of the assumptions that underpin different ways of thinking about operational problems — and how those assumptions shape what appears coherent, actionable, or decisive.

The practical implication is not simply that officers should be familiar with different traditions of operational thinking, but that they must learn to move between them. In our experience, the critical professional skill is the ability to stay with the problem longer than planning procedures normally encourage — asking not only what the template asks, but whether the template fits — and to work with different operational logics before converging on a course of action. In exercises, teams that did this consistently reached different conclusions about what the operation needed to achieve than teams following the standard process. This ability to reframe operational problems may become even more important as artificial intelligence enters planning processes. If AI systems are designed around a single dominant planning logic, they risk reinforcing procedural convergence rather than helping planners explore alternative ways of understanding the operational problem.

From this perspective, operational art can be understood less as the application of a fixed method and more as a professional capacity to navigate: the ability to move deliberately between different modes of operational reasoning as situations evolve. This includes knowing when analytic decomposition is useful and when it obscures more than it reveals, when decisive focus must be discovered through analysis and when it must be constituted through judgment in action, when coherence is best achieved through precision, and when it depends on endurance, orchestration, and adaptation over time. Such movement does not weaken operational art. It is an expression of its maturity.

As AI becomes more deeply embedded in operational planning, cultivating this form of awareness is no longer optional. AI systems will continue to amplify particular ways of seeing, structuring, and valuing information. The question is not whether this influence can be avoided, but whether it will be recognized and governed through professional judgment. Maintaining meaningful human agency in planning therefore depends less on keeping humans formally in the loop than on ensuring that the profession remains capable of understanding the environment in which both human and machine reasoning operate.

Lyons has opened a valuable discussion about how operational art performs under pressure. Engaging that discussion fully may require moving beyond debates about whether doctrine is being applied correctly, toward a clearer awareness of the different operational languages we already employ — often without naming them — and the ways emerging technologies will amplify them.

If operational art is to remain a genuinely professional practice, the challenge is not simply to restore rigor or refine tools, but to build planning cultures and education programs that deliberately cultivate an understanding of different operational traditions — and that treat AI outputs as one input to that process, not as its conclusion.

The profession that built operational art from hard-won battlefield experience must now apply the same critical scrutiny to the tools and traditions it relies on — before an adversary, or an algorithm, does it first.

 

Anders McD Sookermany is a Norwegian military officer and professor at the Norwegian Defence Command and Staff College, where he teaches operational art, strategy, and planning. He is the editor of the Handbook of Military Sciences and leads the research program Making Sense of Military Operations.

Thomas Slensvik is an officer in the Royal Norwegian Navy and serves as course director for operational art at the Norwegian Defence Command and Staff College. He has extensive operational and doctrinal experience, previously serving as co-editor and custodian of the Norwegian Joint Operational Doctrine, and is currently pursuing a Ph.D. focused on military operational planning.

The views expressed are those of the authors and do not necessarily reflect those of the Norwegian Armed Forces, the Norwegian Ministry of Defence, or the Norwegian government.

Image: Lance Cpl. Kenneth Twaddell via DVIDS.
