How do you know if you’re winning a war? It turns out it is much more complicated than you’d expect.
I recently returned from the Naval Postgraduate School in California where I met with other scholars to put together a book on “assessing war” – the efforts that militaries or other fighting forces make to determine whether they are winning or losing a war. I left California seeing the problem in quite a different way from what I’d expected or from what the organizers had intended.
Early discussions about a theoretical framework suggested a common way of assessing our historical case studies: identify the political goals that the war was meant to achieve, then identify the benchmarks that the military in question used to measure its progress toward that goal. Then we were to look at the information that the military collected to determine if those benchmarks had been reached. Finally, we were to consider the incentives that the desire to collect this information created for the military. Often those incentives were perverse, as in the case of the body counts in Vietnam.
Each author gave a brief summary of his or her topic, and as I listened to historians talk about assessment within most American wars from the Civil War through today, it struck me that the historians were describing very quantitative, data-heavy methods of assessment for the wars since Vietnam.
We are past the time when progress could be measured by looking at the movement of a front line on a map. However, at some point the United States military—like the rest of the country—fell in love with data. Lots of data. Data in the form of numbers: munitions expended, body counts, percentage of the country with electricity, poll numbers, number of dollars spent on development projects, etc. The U.S. military loves the word “metrics.” Metrics, of course, allow us to measure things, and we believe that anything can be measured. So now the military gathers a whole slew of data in the hope that measuring things will help us understand what is going on. The problem is, we still don’t understand what is going on. One of the organizers of the conference mentioned that in 2009 he had been offered access to a drive containing “metrics” data on the Afghanistan war… 47 terabytes of data.
As I sat in that room in Monterey, I got to thinking about the distinction in the intelligence business between secrets and mysteries. Secrets are questions to which there is a factual answer. An example is “Where is Ayman al-Zawahiri?” There is an answer to that question, we just don’t know what it is yet. By contrast, mysteries are questions to which there is no factual answer. An example might be “What will Ayman al-Zawahiri do next week?” (Note that this is quite different from “What does Ayman al-Zawahiri intend to do next week?”) There is no factual answer to this question because it depends on future events, including interaction with other human beings, and the future is always in motion.
The way to find secrets is to collect more data; somewhere in the mass of data will be the secret, or pieces of a secret which can be assembled like a puzzle. In the case of mysteries, however, collecting more data is typically the wrong thing to do. More data often makes it impossible to see the forest for the trees. Instead, the answer is to bring in experts, people who have great experience and insight. Of course, the United States Government has managed to badly debase that term. However, with a great deal of luck experts can be found who have coup d’oeil. This is a characteristic that Clausewitz described in On War as “the rapid discovery of a truth which to the ordinary mind is either not visible at all or only becomes so after long examination and reflection.” Not everybody has this ability. Napoleon certainly had it, and it was that fact that made him the transcendent military commander in Clausewitz’s eyes. Clausewitz explained:
When all is said and done, it really is the commander’s coup d’œil, his ability to see things simply, to identify the whole business of war completely with himself, that is the essence of good generalship. Only if the mind works in this comprehensive fashion can it achieve the freedom it needs to dominate events and not be dominated by them.
I believe that the United States military, indeed, the entire Government, has allowed itself to be dominated not so much by events, perhaps, as by hubris, the belief that its legions of staff officers can crunch the numbers, connect the dots and come up with a scientific answer to any question that it desires to answer. In short, it has treated the question of assessing wars like Vietnam or Afghanistan, the question of “Are we winning?” as a secret.
Instead, perhaps the question of whether we are winning is a mystery. In short, perhaps there is no factual answer to the question. This corresponds with one of the most important things that we know about war: it is interactive. It is a duel. Or, to use a different metaphor, the enemy gets a vote. Indeed, in the kinds of wars we have fought recently the enemy not only gets a vote but multiple enemies get votes as do various factions of the civilian population.
If I am correct that the assessment of the kind of wars that the United States fights these days is a mystery, then how can that assessment be done? If the commander does not have coup d’oeil—and most do not, including many of the most capable—then he will need assistance. In any event, even a commander with this rare skill probably should have assistance. Force commanders, like all leaders, are charged with getting complicated and difficult things done, and they will inevitably see the situation through the lens of their own optimistic intentions. This is normally a functional behavior. After all, taxpayers pay them to be Pattons, not Hamlets. However, it also inclines them toward motivated bias—seeing what they want to see. (Note that motivated bias is not intellectual dishonesty. It is simply an analytic pathology.)
There is an important role for outside experts—genuine experts—who are not in a command position and who do not have the force commander in their chain of command. These might be commanders from previous wars who know what winning and losing feel like, they might be respected strategists, experienced intelligence analysts, historians, sociologists, perhaps even investigative journalists who have a sixth sense for BS or a good story. Note that these experts cannot be exclusively intelligence experts because this is a problem of net assessment. This type of assessment entails understanding the interactions of three things: the enemy (normally under the purview of intelligence analysts), the friendlies (normally under the purview of the commander), and all the other actors in the battlespace (nominally under the purview of intelligence analysts but frequently ignored).
It’s a complicated job, to put it mildly. This is why the experts should not be force fed a diet of PowerPoint slides, or buried under 47 terabytes of data. They could have access to all the data they wanted, but it shouldn’t be inflicted upon them. Rather, they should be left alone to, as has been written of Ulysses S. Grant, comprehend problems “in all their simplicity.”
What does this process look like? Who knows? As a matter of psychological reality and of the structure of these problems (they are mysteries without demonstrably correct answers), the outputs can’t be fully justified in a logical or mathematical sense. Is this a weakness? We Americans are probably tempted to say that it is. Yet, that is us looking through our 21st century cultural blinders. Clausewitz would scoff at us.
With 47 terabytes of data, any drone can produce a convincing case for pretty much anything. A wise commander would prefer the judgment of genuine experts with insight even if they can’t enunciate why they know what they think they know, even if they can’t show their work.
Mark Stout is a Senior Editor at War on the Rocks. He is the Director of the MA Program in Global Security Studies at Johns Hopkins University’s School of Arts and Sciences in Washington, DC.