Editor’s Note: This is the first in a series of articles. Subsequent articles will examine answers to each of the four questions laid out below.
As Congress works to hammer out the final details of the National Defense Authorization Act, most of the attention will fall on tactical fights. Those fights over final language and program funding matter, and the final set of AI-related provisions will push AI policy at the Department of Defense in new and important directions. However, most of these provisions also fail to grapple with a universal truth of defense policymaking: Early decisions tend to harden quickly.
To quote Air Force iconoclast John Boyd, “what is doctrine on day one becomes dogma forever after.” The truth is that this year’s law doesn’t just set the amount the Pentagon will receive for swarming demonstrations. It will also set the initial AI defense policy trajectory for the next quarter-century.
Congress can certainly be forgiven for looking past the forest and focusing on the trees. The public discourse around AI and national security in the United States, beyond theoretical discussions about the ethics of lethal autonomous weapon systems or how AI might enable intelligence operations, is surprisingly meager. Many headlines related to AI and national security tend to be about either “killer robots” or a hypothetical “AI arms race.” To top things off, the complex nature of the Pentagon’s infamous Planning, Programming, Budgeting, and Execution cycle only further exacerbates the tendency to fixate on individual line items, programs, and statutory provisions. In short, deep thinking about big questions is largely at odds with the very system meant to provide the answers to those questions.
Despite these structural challenges to answering important, path-dependent questions, policymakers throughout the government should start addressing them before the answers are accidentally locked in. As Congress finalizes this year’s defense bill and the Pentagon begins developing next year’s AI proposals, four key questions deserve careful debate and clear answers. Without both, the next two authorization acts may end up pouring concrete around answers Americans are liable to regret.
Who Gets to Define What’s “Trustworthy” at the Pentagon?
President Donald Trump’s recent AI Action Plan instructed the Department of Defense to develop “responsible AI,” and nearly everyone agrees that militaries need “trustworthy AI.” However, neither the action plan nor popular sentiment has defined what that means, nor, more importantly, who should define it.
If technical experts and program managers in the Office of the Under Secretary of Defense for Research and Engineering or the military services get to define what is trustworthy, it will likely mean an emphasis on technical security (e.g., protection from data poisoning). Such a definition probably means massive investments in verification and validation pipelines that will emphasize incremental improvements of AI that can be trusted by the testing and evaluation community.
If operators throughout the military services set the definition, then adoption and diffusion of AI throughout the Department of Defense will be predicated on whether servicemembers are willing to use it. This definition probably drives investment in user interfaces, rapid integration into tedious tasks, and a proven robustness and reliability on the battlefield that currently seems quite distant.
Finally, if policy wonks or lawyers inside the Office of the Secretary of Defense get to define what is trustworthy, then this is likely to drive investments related to explainability, human control, and coalition interoperability.
Not all of these definitions are mutually exclusive, but they are often at odds when it comes to prioritizing investments, research agendas, and doctrine. Even more provocatively, what if it turns out that all three definitions of “trustworthy” are required for the broad diffusion of AI throughout the U.S. military? Who will be responsible for assembling the strategy to ensure that all three definitions are eventually met, and who will get to choose in what order they are prioritized?
What Is the Role of AI Beyond War?
Although most current debates frame it this way, AI does not have to be confined to supporting warfighting. How and whether AI is integrated into other instruments of national power remains largely unaddressed, both by the Trump administration and by Congress. Given the inherent dual-use nature of AI, it seems plausible that AI’s impact might be most acutely felt in the persistent “gray zone” of peacetime competition rather than in a “hot” armed conflict.
What is AI’s role in information operations, economic statecraft, cyber campaigns, or coercive diplomacy? When the line between war and peace is deliberately blurred, and the cast of actors extends far beyond the Department of Defense, who gets to decide whether AI will be used as a new tool of influence or deliberately constrained to avoid unnecessary escalation? Cyber operations posed many of these same challenges when they first emerged a decade ago, and defense planners and strategists are only now getting around to finding answers. If future defense authorization laws or new defense strategy documents normalize AI as part of “gray zone” conflict, the current era of great power competition may only harden. If those same documents constrain the use of AI, escalation may abate, but at the cost of ceding U.S. influence around the globe. Either choice will start to build institutional habits that will prove difficult to unwind.
Will AI Augment the Military Services, or Redefine Them?
When it comes to its use at the Department of Defense, should AI be thought of in terms of evolution or revolution? This isn’t just semantics — it speaks to the very heart of how the military services man, train, and equip to execute their war-fighting missions.
If AI is evolutionary, then existing weapon systems and organizations simply become far more efficient and effective. So far, the Pentagon appears to be taking this path by treating AI as a tool to fine-tune intelligence gathering, streamline its back-office functions, improve targeting, and optimize its global logistics. Treating AI as a force multiplier certainly has its value, but what if U.S. adversaries and competitors treat it as something more? What if they use AI to enable swarming attacks, field fleets of fully autonomous submarines that can stay at sea indefinitely, or conduct offensive cyber operations that fundamentally threaten both military and civilian personnel?
If the Pentagon continues to embrace AI as a force multiplier, it risks being surprised by doctrinal shifts abroad. If it embraces the revolutionary potential of AI, it will need to undergo the hard, painful work of redesigning force structures and revising operational concepts that strike at the very foundations of well-ingrained service identities. Whichever framing policymakers choose will drive acquisition priorities, recruiting and retention, and alliance planning for decades.
Who Will Hold Power Over Military AI?
At the advent of nuclear weapons, agents of the state monopolized the means of production and development. This initial monopoly allowed the U.S. government to uniquely shape and guide the diffusion of nuclear technology and its attendant regulation. In the age of AI, that dynamic has been turned on its head.
Development and training of frontier AI models are almost exclusively driven by an exceedingly small number of commercial actors, and the U.S. government is a “taker,” not a “maker,” of this emergent technology. Additionally, startups, cloud providers, and open-source communities are pushing the field forward faster than the U.S. government can regulate it. That leaves the Pentagon facing a profound question: How much should it rely on industry actors to provide, adapt, and even operate AI systems in the future? Continuing to lean heavily on the frontier labs will replicate the same choices that birthed the current military-industrial complex.
This choice is likely to accelerate adoption, but it will also create dependency on actors that may not always be fully aligned with national security priorities and that necessarily have different incentive structures. Pivoting to bring national security AI development “in-house” to the U.S. military will sacrifice speed, since the Pentagon lacks the talent, infrastructure, and funding to achieve in a year the kind of exponential model growth that frontier labs seem to achieve every week. Hybrid models of governance that include joint ventures, public-private labs, or defense-specific open-source frameworks are possible, but every choice sets a further precedent as to who holds real leverage in future AI-enabled conflicts.
The Right Answers Will Require Deep Thinking
Even though Congress has started its final round of markup on this year’s versions of the National Defense Authorization Act, there is still time to engage with each of these questions and start to deliberately align specific statutory provisions and report language with preliminary answers. These questions are hard, and almost all of them are ones upon which reasonable people can disagree. But the risks of ignoring them are too great. If the long history of defense policy has taught policymakers anything, it’s that the dilemmas they duck today will be the path dependencies they live with tomorrow.
Once this year’s National Defense Authorization Act is completed and passed into law, the entire national security community should prioritize engaging in thoughtful, inclusive, and robust debate to answer these questions. Unfortunately, the easiest, and perhaps the most likely, outcome is for both the legislative and executive branches to become immediately preoccupied with the day-to-day sprint of preparing for next year’s legislative agenda and set of budgetary priorities.
The temptation to treat AI as just another line item — just one more topic in a myriad of defense topics to be resolved through routine processes — is real, but it should be resisted. Over the next year, Congress, think tanks, advocacy organizations, policy shops, and senior leaders throughout the national security enterprise should develop informed strategies by taking the time to pause, reflect, and debate. For once, the most urgent task is not to sprint faster, but to ask the right questions before the United States locks in the wrong answers.
Morgan C. Plummer is currently a senior policy director at Americans for Responsible Innovation, a non-profit public advocacy group based in Washington. He previously served as a professor of practice at the U.S. Air Force Academy, a defense and security expert at Boston Consulting Group, and a senior defense official at the U.S. Department of Defense. Morgan also served as a U.S. Army officer, holding various command, staff, and Pentagon assignments and deploying multiple times to Iraq. He can be reached at morgan@ari.us.
Please note that, as a matter of house style, War on the Rocks will not use a different name for the U.S. Department of Defense until and unless the name is changed by statute by the U.S. Congress.
Image: Midjourney