
The Hidden Cost of a Missile: Why the Headlines Get Cost Wrong

Erik Schuh
November 18, 2025

Since late 2023, the U.S. Navy has fired nearly $1 billion worth of munitions to protect ships in the Red Sea from low-cost Houthi drones and missiles. The headlines that followed have rightly pointed out the absurdity of firing multi-million-dollar missiles against cheap drones. But those headlines miss the bigger picture.

Behind each intercept lies a vast and expensive ecosystem: the carrier strike group and its escorts, the logistics tail that keeps them fueled, the training pipeline for crews, and the command-and-control networks that make the engagement possible. In reality, the cost of downing each drone is not a few million dollars, but hundreds of millions in operational and sustainment expenses. Yet that pricey shot may have prevented an even greater loss: a $2.5 billion Arleigh Burke-class destroyer and, more importantly, its sailors.

This tension between the cost of an action and the value of its outcome sits at the heart of a Pentagon buzzword: “cost-per-effect.” What exactly counts as “cost” and how to define a meaningful “effect” are not as simple as the headlines suggest. If the Department of Defense gets cost-per-effect wrong, it risks favoring systems that appear cheap on paper but demand costly infrastructure, or buying affordable systems that are so operationally limited that they undermine the very effect they are meant to produce.

What Does “Cost” Really Mean?

When analysts talk about the cost of a capability, it turns out they might not all be speaking the same language. The most commonly cited metrics — the ones shown in those headlines — tend to be acquisition costs. The average procurement unit cost and the program acquisition unit cost, for example, are standard metrics that roll up development and procurement dollars and divide them by the number of units. These figures are useful for budgeting but often don’t capture what it will cost to actually use the system.
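The arithmetic behind these metrics is simple. Here is a minimal sketch, following the article's simplified description of the two measures; every figure is a hypothetical placeholder rather than data from any real missile program:

```python
# Sketch of the two acquisition metrics named above, following the article's
# simplified description. All figures are hypothetical placeholders, not data
# from any real missile program.

rdt_e_dollars = 3_000_000_000        # development dollars
procurement_dollars = 9_000_000_000  # procurement dollars
units = 4_000                        # number of missiles bought

average_procurement_unit_cost = procurement_dollars / units
program_acquisition_unit_cost = (rdt_e_dollars + procurement_dollars) / units

print(f"Average procurement unit cost: ${average_procurement_unit_cost:,.0f}")
print(f"Program acquisition unit cost: ${program_acquisition_unit_cost:,.0f}")
# Neither number says anything about the ship, crew, fuel, or logistics
# required to actually fire the missile.
```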

Take the Navy’s SM-2 and SM-6 missiles used to shoot down Houthi drones. Their price tags, roughly $2.2 million and $4.3 million respectively, are only part of the picture. The real cost includes the destroyer that fires them, the crew that operates the system, the fuel and maintenance that keep it at sea, and the logistics network that makes each shot possible. While no one should allocate all of these costs to a single engagement, ignoring them entirely risks dramatically underestimating what it truly costs to achieve even a simple tactical effect like intercepting a drone.
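A back-of-envelope allocation shows why. In the sketch below, only the SM-6 price tag comes from the figures cited above; the operating cost, the air-defense share, and the number of engagements are hypothetical placeholders, and the point is the direction of the adjustment, not the specific numbers.

```python
# Back-of-envelope sketch: how the per-shot figure moves once a share of
# operating costs is allocated to an engagement. Only the SM-6 price tag comes
# from the article; every other figure is a hypothetical placeholder.

sm6_price_tag = 4_300_000        # cited unit price of the interceptor
escort_daily_ops = 2_000_000     # assumed daily operating cost of the destroyer
air_defense_share = 0.10         # assumed share of that day spent on air defense
intercepts_that_day = 2          # assumed number of engagements

allocated_ops = escort_daily_ops * air_defense_share / intercepts_that_day
per_intercept_cost = sm6_price_tag + allocated_ops

print(f"Missile only:       ${sm6_price_tag:,.0f}")
print(f"With allocated ops: ${per_intercept_cost:,.0f}")
# The direction matters more than the numbers: any defensible allocation of the
# operational tail pushes the true cost per shot above the sticker price,
# without charging the entire strike group to a single engagement.
```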

Part of the problem is that the Department of Defense has no standardized approach for comparing operational costs across capabilities. Each service applies its own methodology, and even then, cost estimates vary widely. Additionally, multiple offices can be considered authoritative sources for cost numbers on the same capability. This is where independent cost agencies like the Office of Cost Assessment and Program Evaluation or the Air Force Cost Analysis Agency come into play. Whereas program offices are incentivized to understate costs, these independent agencies develop Non-Advocate Cost Assessments — a far more realistic foundation for comparing capabilities.

Defining “Effect” 

If defining cost is hard, defining effect is even harder. Joint doctrine defines an effect simply as the outcome of an action. The definition is broad because an effect can be virtually anything, ranging from the use of force (destroying a target) to other means (cyber operations), and from the tactical (one drone shot down) to the strategic (preventing an enemy from winning the war). To compound the problem, some effects are intangible or only observable over the long run. How, for instance, do analysts measure the effect of a deterrence operation?

In the Red Sea scenario, there were multiple effects, ranging from tactically defending naval vessels to strategically keeping shipping lanes open. Focusing narrowly on the missile engagements, the tactical effect was clear: protect U.S. forces from drones. To achieve it, the Navy had a variety of capabilities: long-range missiles, Gatling guns, small arms, and non-kinetic weapons. The smaller weapons were cheaper, but they are the last line of defense before a threat reaches the destroyer. A layered defense is more effective, which makes the outermost layer — missiles — the first and best option. Moreover, no commanding officer is going to bet their $2.5 billion ship — and hundreds of sailors — on the cheapest possible option. They will use the best available tool, even if it’s expensive.

Doing Cost-per-Effect Right 

To use cost-per-effect as a meaningful tool, analysts need to be honest about the full cost of delivering an effect and not just what a missile costs on paper. That starts with a clear definition of the desired effect — in this case, protecting U.S. naval forces from aerial threats — and then identifying every cost necessary to achieve it.

Of course, not all costs should be included. One way to handle this is to move away from gross acquisition costs and toward a comparative cost framework. Gross costs still matter for budgeting, but cost-per-effect analysis should illuminate trade-offs, not set budgets. One fallacy that persists in the Pentagon is an attachment to sunk costs regardless of whether a program remains valuable. Missiles already in inventory, for example, have already passed through research and development, so those sunk costs shouldn’t distort future cost-effectiveness evaluations. The Army has recently shown how to correct this mindset by cutting multiple programs that no longer align with the modern fight.

The next challenge is distinguishing direct costs from indirect and common costs. Direct costs can be tied straight to the capability, such as the tanker aircraft that refueled it or the sensors that guided it. Indirect costs cannot be traced directly to the unit or personnel supporting the system being analyzed, such as the installation support at the home port of that Navy destroyer. Such costs help make the destroyer operational, but it is too difficult to include all of them in the analysis. Common costs, on the other hand, are costs that are the same across all capabilities being compared, such as the satellites used for command and control across every asset in a carrier task force. Lastly, there are negligible costs, those that represent less than 1 percent of the overall cost. They help determine how far back in the pipeline of direct costs the analyst should go: eventually the costs become so small that the effort to obtain the numbers isn’t worth the difference they make to the result.
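Putting these rules together, a comparative framework can be sketched in a few lines. The classification labels and dollar figures below are hypothetical, and the logic is only a minimal illustration of the approach described above, not an official methodology:

```python
# Minimal sketch of the comparative framework described above: classify each
# cost element, keep direct costs, set aside sunk, common, and indirect costs,
# and drop negligible items. Labels and dollar figures are hypothetical.

NEGLIGIBLE_SHARE = 0.01  # items under 1 percent of the total are not worth collecting

def comparable_cost(elements):
    """elements: list of (label, category, dollars); returns the cost used for comparison."""
    total = sum(dollars for _, _, dollars in elements)
    kept = 0.0
    for label, category, dollars in elements:
        if category in ("sunk", "common", "indirect"):
            continue              # excluded from the comparison
        if dollars / total < NEGLIGIBLE_SHARE:
            continue              # negligible: collection effort exceeds the value
        kept += dollars           # direct cost of delivering the effect
    return kept

interceptor_engagement = [
    ("missile round",                            "direct",   4_300_000),
    ("tanker and sensor support",                "direct",     150_000),
    ("research and development already spent",   "sunk",     1_000_000),
    ("command-and-control satellites",           "common",     500_000),
    ("home-port installation support",           "indirect",   300_000),
    ("miscellaneous consumables",                "direct",      20_000),
]

drones_defeated = 1
print(f"Cost per effect: ${comparable_cost(interceptor_engagement) / drones_defeated:,.0f}")
```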

Beyond the Battlefield

So far, this framework has focused on a single dimension: how a capability performs operationally. But cost-per-effect isn’t just about technology. It should also consider whether changes in tactics, techniques, or procedures could achieve the same outcome. In some cases, adapting how forces fight can yield greater cost-effectiveness than fielding new capabilities.

Another overlooked dimension lies upstream of cost analysis: the ability to actually produce and sustain capabilities at scale. Ukraine has reminded the defense industry that cheap, adaptive, and scalable solutions win wars of attrition. The Red Sea scenario highlighted the same lesson: the cost difference between million-dollar missiles and inexpensive drones tells only part of the story. What’s often overlooked is how long those missiles take to produce and whether production can scale quickly. A weapon’s cost-effectiveness is meaningless if it cannot be produced quickly or in sufficient quantity. True cost-per-effect must therefore account not only for how efficiently a weapon performs in combat, but also for how rapidly it can be produced, replenished, and adapted under wartime conditions.

Why It Matters 

Done right, cost-per-effect analysis can drive smarter investments and operational choices. A comprehensive approach may reveal that long-range missiles are more cost-effective in certain scenarios because their superior effectiveness outweighs their high cost. Conversely, it may show that cheaper systems, or even a change in tactics, achieve the same outcome at far lower cost.

But until analysts do the work, headlines about million-dollar missiles versus thousand-dollar drones are just noise. Sneering at $4 million missiles misses the broader truth: Those missiles may be the best bad option available. The real problem isn’t that commanders are using expensive interceptors. It’s that they don’t have cheaper alternatives that are equally effective.

Erik Schuh is an Air Force officer serving as an operations research analyst. The views expressed are those of the author and do not reflect the official guidance or position of the U.S. government, the Department of Defense, the U.S. Air Force, or the U.S. Space Force.

**Please note, as a matter of house style War on the Rocks will not use a different name for the U.S. Department of Defense until and unless the name is changed by statute by the U.S. Congress.

Image: U.S. Navy photo by Mass Communication Specialist 2nd Class Jonathan Nye
