Editor’s Note: This is the fourth in a series exploring key AI policy choices faced by the Department of Defense and Congress. Please also read the first, “Four AI Policy Choices Policymakers Can’t Afford to Get Wrong,” second, “Warfighters, Not Engineers, Decide What AI Can Be Trusted,” and third, “Operating AI in the Gray Zone: Drawing Clear Lines Before They Blur.”
Whenever the U.S. national security community becomes anxious about a new “transformational” technology, the plot tends to be a familiar one. Pundits warn that America will fall behind without serious, additional investment. Military and civilian leaders at the U.S. Department of Defense argue that doctrine must adapt to match the newly emerging character of war. Congressional staffers ping the Pentagon with questions about why it’s not moving faster. Before long, the entire defense establishment convinces itself that a revolution is underway, and it’s one that is both inevitable and necessary.
AI may be the latest chapter in this recurring story. Many insist that AI will fundamentally reshape warfare by enabling autonomous swarms that will blot out the sun, real-time targeting from the tactical to strategic levels of warfare, or autonomous undersea platforms that make the oceans transparent and can operate indefinitely. If the United States buys this vision of AI-enabled warfare, there is only one logical conclusion: The Pentagon must engage in the hard slog of building new formations, updating doctrines and operating concepts, and fundamentally redesigning the U.S. military around AI. If Pentagon senior leaders fail to do these things, the United States risks falling behind adversaries and competitors who realize the promise of AI sooner, move faster, and adopt it throughout their military formations more aggressively.
But this framing is as binary as it is unhelpful. On one side sits evolutionary adoption: incremental modernization that improves intelligence, logistics, and the “back office” administration that defines large bureaucracies, including the Pentagon. On the other side sits a revolutionary redesign of the U.S. national security apparatus driven by the assumption that AI is about to change the nature of war — or already has. As with most things, the truth is more nuanced. There can be little doubt that AI has revolutionary potential, but the Department of Defense should be relieved that it’s not yet revolutionary in practice. History suggests that when the Pentagon bets its force design and doctrinal decisions on specific technologies that appear transformative, it usually overcorrects. The results are often costly, destabilizing, and difficult to unwind. The Pentagon’s current approach, which one might describe as “slow and steady,” may primarily be the result of bureaucratic inertia, but it’s not a bureaucratic flaw. Indeed, it may be the only thing saving the Pentagon from its own enthusiasm.
The Graveyard of American Defense Revolutions
Although few would call the Department of Defense an early adopter, its modern history is littered with examples of redesigning itself around technologies that promised to alter the essence of warfare, only to discover that the underlying assumptions were wrong, or at least vastly overstated. Few of these examples are more illustrative, or more cautionary, than the Pentomic Army of the 1950s.
A product of Cold War doctrine, the Pentomic concept fundamentally reorganized the U.S. Army around the belief that the distribution and employment of tactical nuclear weapons would define the next war. The traditional, regimental-based system was scrapped and replaced by five “atomic-ready” battle groups per division that were optimized for dispersion and survivability. The Army rewired command relationships, developed completely new logistical systems, and revised its entire warfighting concept around tactical nuclear technology. The challenge, of course, was that the technology did not, and could not, deliver the battlefield effects that the Army assumed. Thankfully, tactical nuclear weapons proved politically unusable, strategically incoherent, and operationally complicated, and the concept was soon abandoned by the United States. The Army briefly considered retaining its new force design but soon realized it was untenable: Battle groups were too small to sustain prolonged combat operations, and command posts were too fragmented to effectively direct maneuvering units. In short, the Army discovered it had sacrificed core fighting capability in pursuit of an imagined future. The Pentomic transformation and its undoing consumed nearly a decade and left the Army less ready to meet real-world missions during that time. Capability shortfalls from the Pentomic years were so severe that they shaped debates about the proper design of U.S. ground forces well into the Vietnam era.
Sadly, the Pentomic story is not a unique one. Several decades later, the “revolution in military affairs” of the 1990s promised a sensor-to-shooter revolution that would change everything known about warfare. Instead, while precision-guided munitions made airpower and targeted strikes far more effective, early operational concepts and doctrinal shifts predicated on these technologies had to be scrapped. In a tragic redux of its Pentomic efforts, the Army began making massive force design alterations in the early 2000s to meet its Future Combat Systems vision of automated and unmanned formations. The technology simply did not mature, and the eventual cancellation of Future Combat Systems cost tens of billions of dollars and left a conceptual vacuum that some argue the Army is still trying to fill. Even AirLand Battle, arguably one of the most successful doctrinal updates in modern American history, was premised on assumptions about deep-strike capabilities and synchronized maneuver that were only really relevant to a single operational theater and materialized more in ornate PowerPoint diagrams than they ever did on the battlefield.
Each of these efforts shared a common flaw: They were built around a technology or series of technologies that appeared revolutionary, but whose actual battlefield utility failed to live up to theoretical hype. The risk today isn’t that the Pentagon will ignore AI — all evidence points to the contrary. The real risk is that the Department of Defense will indulge, yet again, in its modern habit of prematurely organizing around a technological future that doesn’t yet exist.
Revolutionary Redesign Requires Revolutionary Confidence
Before using AI to justify a reorganization of the U.S. armed forces, Pentagon leadership should demand extraordinary confidence: confidence both in the revolutionary potential of AI and in its near-term reliability under wartime conditions. That confidence doesn’t exist among senior defense leaders today. Nor should it.
AI models currently display brittleness in ways that matter both tactically and operationally. Large language models, which currently proliferate throughout the Department of Defense, famously hallucinate with shocking levels of self-confidence. Computer vision systems tend to fail when exposed to simple, adversarial camouflage. In one test, marines hiding inside a cardboard box advanced on an autonomous weapon system and remained undetected by its sensors. Autonomous systems, by their very nature, require exquisite sensor fidelity and clean data streams, two things no battlefield can guarantee. Even commercial AI systems, which have the benefit of abundant cloud computing, troves of clean data, and uncontested environments, can suffer failure modes that are unacceptable in combat.
For those sitting outside the Pentagon, rapid adoption of AI by the commercial sector is often cited as evidence that the Department of Defense is falling behind, must move faster, and is not keeping up with emergent commercial tools that would vastly improve its warfighting mission. But commercial uptake only demonstrates how useful AI can be in predictable, data-rich environments where failure modes are measured in lost time or revenue. Modern combat offers none of these conditions. Data is messy or absent, targeted sensor deception is not just routine but a practiced art, and system failures could result in American service members traveling back to the United States in flag-draped coffins. Given this, Pentagon leaders could be easily forgiven for their skepticism that tools optimized for marketing analytics or driving click rates will seamlessly transfer to kill chains and fire control systems.
There is no question that AI will meaningfully accelerate intelligence analysis, logistics forecasting, and administrative workflows, both in the Pentagon and on the battlefield. But these gains, while important, are evolutionary. They neither justify new force structures nor warrant upending the deeply ingrained service cultures that guide current force design and doctrinal choices. There may yet be sufficient cause to reorganize the American military when a technology — like AI — is so reliable, mature, and operationally decisive in combat that avoiding these difficult design choices would constitute strategic malpractice. Before AI becomes that technology, it needs additional time to incubate. Counterintuitively, that may very well prove advantageous for the United States.
Transforming Time into an Advantage
The Department of Defense should harness AI aggressively wherever it is ready for adoption but move deliberately whenever it is not. To do so, the Pentagon will need more than speeches about responsible adoption or principles to guide procurement. The U.S. military will require legislative, structural, and bureaucratic choices that help prepare for an AI revolution rather than try to incite one.
In next year’s National Defense Authorization Act, Congress should create a new program of record, an AI Readiness and Integration Fund, to modernize the Defense Department’s information technology architecture in preparation for AI adoption. Rather than another flashy AI provision in the annual policy bill, the Pentagon needs to get on with the unglamorous and necessary work of replacing the brittle, siloed information technology systems that cannot safely host advanced models. If the AI revolution ever comes, it won’t be able to arrive inside the 1970s-era mainframes or balkanized service networks that constitute the vast majority of the Defense Department’s information technology systems. The fund should carry multi-year appropriations to ensure that projects do not stall out as Pentagon leadership rotates and priorities shift.
Additionally, Congress should require safety and security-by-design certifications for any AI system procured by the Department of Defense as part of next year’s National Defense Authorization Act. Articulating the specifics of these certifications to make them operationally relevant will likely require consultation with industry experts, but they should include repeatable, standardized testing and evaluation regimes for AI under contested conditions. This should include testing resilience to operational deception challenges, electronic warfare stress tests, and cyber-resilience. No AI system should enter a kill chain without passing an independent, red-team evaluation that actively attempts to break the model the Pentagon is considering for purchase.
Finally, the Trump administration should direct increased and iterative field experimentation and operator testing of AI systems by the military services and combatant commands. These experimental pockets should deploy new AI systems that help warfighters conceptualize what future doctrinal changes might be required when, and if, the AI revolution arrives. They should also use these pilot efforts to expose failure modes before acquisition decisions lock in ill-founded assumptions about promised capabilities. In the interim, the secretary of defense and the secretaries of the military departments should slow-roll any proposal that attempts to reorganize the services or the joint force structure around AI until operators demonstrate trust and familiarity with these new tools.
When the prevailing wisdom is “more AI, all the time,” these steps might seem timid. But they are not — they are prudent. These actions help create time and space to prepare for a future moment when AI may yet revolutionize modern warfare and they recognize that the U.S. military’s most important resource is not technological novelty, but rather operational reliability. AI adoption should accelerate when, and only when, the infrastructure is prepared, the nation’s warfighters are confident, and the technology is as reliable as possible. Until then, there’s lots of work to be done to prepare for the revolution that isn’t quite here yet.
Slow Is Smooth, and Smooth Is Fast
The Pentomic Army isn’t a relic of history — it’s a warning. If there’s anything that a trip through the graveyard of failed defense revolutions teaches the careful observer, it’s this: When the United States rushes to engineer a technological revolution in warfare, it tends to get the timing wrong, the concepts wrong, or both. The slow-and-steady approach doesn’t reject the promise of AI-enabled warfare but rather helps prepare for it.
While it waits for the revolution that many promise, the Department of Defense should use AI today where it makes the most sense and is proven: intelligence and data analysis, logistics optimization, training, and administration. These are the places where AI has already yielded gains in the commercial sector and where the crossover into the daily work of the Pentagon is most direct. These evolutionary gains will strengthen the military’s warfighting capabilities now, while buying time for the technology to mature to the point where revolutionary change becomes not just possible but necessary.
But if the AI revolution arrives, it will do so on its own timeline — not that of the Joint Staff’s PowerPoint decks. The job of Congress, and senior leaders at the Department of Defense, is to ensure that when that moment finally comes, the American military is ready for it. This readiness shouldn’t be obtained because the Defense Department reorganized early, but rather because it prepared wisely. AI policy is complicated, but preparing for AI adoption at the Department of Defense doesn’t have to be. The old U.S. Special Forces adage is both helpful and instructive here: Slow is smooth, and smooth is fast. Until the revolution gets here, this wisdom should guide the decisions of policymakers and practitioners alike.
Morgan C. Plummer is currently a vice president at Americans for Responsible Innovation, a non-profit public advocacy group based in Washington. He previously served as a professor of practice at the U.S. Air Force Academy, a defense and security expert at Boston Consulting Group, and a senior defense official at the U.S. Department of Defense. He is a former U.S. Army officer and served in various command, staff, and Pentagon assignments and deployed multiple times to Iraq. He can be reached at morgan@ari.us.
*Please note, as a matter of house style, War on the Rocks will not use a different name for the U.S. Department of Defense until and unless the name is changed by statute by the U.S. Congress.
Image: Sgt. Andrew King via DVIDS.