Living With Fog and Friction: The Fallacy of Information Superiority
War is the realm of uncertainty; three quarters of the factors on which action in war is based are wrapped in a fog of greater or lesser uncertainty. A sensitive and discriminating judgment is called for; a skilled intelligence to scent out the truth.
— Carl von Clausewitz, On War (Paret translation)
For those searching for a key to the science of warfare, Carl von Clausewitz is a dash of cold water. The wily Prussian general wrote extensively about the form of European warfare that he had experienced. In contrast to his contemporary Antoine Jomini, who attempted to derive a system of warfare, Clausewitz laid out a philosophy of warfare much more attuned to the uncertainty and chaos that are part and parcel of military operations. Jomini’s writings were firmly embedded in U.S. military studies and heavily influenced both Union and Confederate officers. The preference for a scientific, Jominian system of warfare persists in the U.S. Department of Defense, and the prevalence of “system-centric” concepts ebbs and flows irregularly through service concepts and doctrine. There is a deep-seated discomfort with the idea of uncertainty and the messiness that befalls the best-laid plans, and the so-called fog of war is wildly unsettling. Unfortunately, the uncertainty described by Clausewitz is a staple of real military operations, even in peacetime, and wishing it away is a futile endeavor. Given a choice between trying to engineer a way out of reality and training our personnel to deal with uncertainty, the answer should be obvious.
A look at the services’ future-oriented concepts is revealing in this respect — there is no shortage of concepts whose goal is to eliminate the fog of war and obliterate friction through the “seamless” application of some new technology. This trend is intensely unrealistic, for it risks cultivating a military hierarchy that believes the flimsiest of sales pitches and will be unable to deal with the reality of warfare when it shows up, shrouded in smoke, beset by friction, and showered in uncertainty. The emphasis is in exactly the wrong place, focusing on unproven or undeveloped technology when we should be focusing on training our personnel for the uncertain environments they will surely face.
Information Superiority. The operational advantage derived from the ability to collect, process, and disseminate an uninterrupted flow of information while exploiting or denying an adversary’s ability to do the same.
Information superiority and its cousin, information dominance, found their way into the defense lexicon some time ago. Defined in JP 1-02 (the Joint Dictionary), information superiority is treated much as any other type of superiority, implying a zero-sum game. If I have air superiority, then by definition my adversary does not. Admittedly, there are conditions where neither side can develop superiority, but the basis of “superiority” is very simple: It is a symmetrical measure whereby either nobody has it or only one side does.
On the face of it, a zero-sum game should make the need for information superiority self-evident, in an “I have it and you don’t” kind of way. But mere possession of information is no indication of superiority over an adversary, because information is simply a warfighting commodity, often of questionable provenance. Possessing information is not an end in itself; it is one means among many of enabling decision-making. The utility and effectiveness of information pathways are directly related to the cognitive framework where the information ends up and to the command structure by which decisions are translated into action. In short, the presentation and dissemination of information cannot be separated from the humans who use it.
The OODA Loop
Information is merely an enabler for decision-making; it feeds into a decision-making model but does not substitute for it. One commonly accepted model is the OODA Loop, defined by USAF Col. John Boyd. The loop characterizes a repeating cycle within the cognitive process: observe, orient, decide, and act. Information from a variety of sources, cross-linked with other cognitive inputs such as experience and cultural traditions, feeds into the “observe” and “orient” portions of the loop. Orientation is critical — it provides the context by which information is filtered and processed in the human brain. This combination of observation processed through a contextual lens enables a “decide” element that feeds back into observation and feeds forward into the “act” element. The outputs of an OODA loop are unique to the decider because of the contextual lens.
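For readers who think in code, the cycle can be sketched in a few lines of Python. This is purely illustrative: the dictionary “environment,” the keyword-matching “context,” and every name below are our own inventions rather than anything Boyd formalized. The sketch makes concrete the claim that identical observations, filtered through different contextual lenses, yield decisions unique to the decider.

```python
def observe(environment):
    """Gather raw reports; some may be incomplete or wrong."""
    return environment["reports"]

def orient(observations, context):
    """Filter observations through the decider's context, the lens that
    makes each loop's output unique to that decider."""
    return [r for r in observations if any(k in r for k in context["keywords"])]

def decide(picture):
    """Pick a course of action from the oriented picture."""
    return "engage: " + picture[0] if picture else "hold and re-observe"

def act(action, environment):
    """Execution changes the environment the next pass will observe."""
    environment["log"].append(action)

env = {"reports": ["bandit bearing 090", "fuel state low", "weather update"],
       "log": []}
pilot = {"keywords": ["bandit", "fuel"]}  # a context shaped by training

for _ in range(2):  # two passes around the loop
    act(decide(orient(observe(env), pilot)), env)
print(env["log"])   # the same reports, seen through this pilot's lens
```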
The loop was originally designed to describe cognition in a one-on-one basic fighter maneuvers (BFM) engagement, with two identical fighters starting from a neutral position and trying to achieve a kill in a completely symmetrical fight — the same weapons, the same airframe, the same performance, and pilots from the same air force in a constrained volume of airspace where the fog of war is strictly limited. As the OODA loop scales up, it leaves the confines of a single organism and becomes a linked network of loops dependent on some form of external communication rather than internal cognitive processes. It necessarily becomes more susceptible to the fog and friction associated with warfare as the number of loops proliferates and as the communication requirements multiply.
The effect of a focus on information superiority is to emphasize the inputs into the loop rather than the outputs from it, making the completely unwarranted assumption that the primary determinant of decision quality is the amount or quality of the input rather than the nature of the decision-making process and its links to effective action. The former could be a function of information technology and communications, which are things one can buy or build; the “decide” and “act” elements are a function of the way we educate and train. A well-trained airman can function effectively (if not optimally) in the face of incomplete or false information, whereas high-quality information flows can be rendered useless by the inappropriate actions of poorly prepared or trained decision-makers. The emphasis on inputs hides the fact that the measures of effectiveness for an OODA loop are its outputs, where decisions must be made, coupled to actions, and communicated.
Asymmetry
OODA loops are not symmetrical, meaning that the adversary’s OODA loops do not look like ours and do not operate in direct opposition. The opposing OODA loops are not tightly coupled to each other and are subject to many events and variables that lead to effects on the battlefield. From the standpoint of two military forces acting in opposition, everything about their decision-making is asymmetrical, from the nature of the decisions needed and their rapidity to the individuals making the decisions and the level at which those decisions are made. The information flows are contributory rather than independently decisive, and the amount of information necessary is only relevant internally — how much information you need is a function of your own decision-making requirements. The enemy has a different set of objectives, methods, and architectures, and his information needs are by necessity entirely different. The concept of information superiority or dominance is nonsensical when the decision-making requirements of opposing forces are potentially so different.
Boyd spoke about turning “inside” the adversary’s OODA loop, an asymmetry often misrepresented as requiring an emphasis on decision-making speed. In fighter aviation terms, turning inside an adversary is not necessarily a function of speed. All other things being equal, the faster aircraft often has the wider turn circle. But all other things are rarely equal, and getting inside the opponent’s OODA loop is merely a metaphor for being able to act appropriately in response to often-chaotic conditions — otherwise known as gaining and maintaining the initiative. The objective is to exploit or create an asymmetry that favors your side. Getting inside the adversary’s decision loop has very little to do with the speed of your decisions and everything to do with generating effects that the enemy has difficulty reacting to.
Decision Speed
Superior Decision Speed. Uncertainty and incomplete information are realities in warfare. Rather than demanding “perfect” intelligence, military forces must be able to make accurate decisions at a rate that provides advantages over adversaries.
The Air Force has recently embraced “superior decision speed” as an alternative to information superiority, substituting one flawed construct for another. The Air Force Future Operating Concept (AFFOC) uses decision speed as an element of agility but places too much emphasis on the hardware and not enough on the wetware. Like information superiority, it focuses inappropriately on the input to an OODA loop rather than the output from it. While the AFFOC explicitly calls out uncertainty as a constant influence, it then dives into aspirational language that essentially does away with its effects. The paragraph that defines superior decision speed mostly touts complex technology rather than capable individuals. Predictably, the emphasis on technology subordinates the human portion of the human-machine interface, treating the human as a cognitively handicapped machine in need of a few tech enhancements rather than as the critical element of the OODA loop and the proper beneficiary of training and education. The AFFOC lays out an improbably complete flow of information in a globally centralized command-and-control architecture:
Collected data will be integrated in an open, adaptive information construct unburdened by unnecessary classification barriers. Air, space, and cyberspace ISR assets will share information seamlessly and contribute to a Common Operating Picture (COP). A global COP will require advanced capabilities and various degrees of automation to unlock […]. The User Defined Operating Picture (UDOP) will provide the interface between the decision-maker and the global COP. Human-machine interfaces will be engineered to deliver the right information and level of detail to the right person at the right time to make the right decision. This construct will balance speed with accuracy to deliver the ability to make risk-appropriate actionable decisions. Together, these elements will increase the speed and quality of decision-making to allow superior responsiveness.
There is an obvious question regarding superior decision speed: superior to what? A commander executing a preplanned operation may have very few decisions to make, because the key decisions were made long before the operation was undertaken. This kind of OODA loop can be extremely “tight” because the information required is limited and the necessary decisions likewise limited, but such a loop is typically not very flexible. A less rigid OODA loop might allow for more flexibility at the cost of demanding significantly more from both commanders and the command structure. A centralized command structure could effectively disable the process because the links between the loop elements are slow to negotiate. In such a case, the decision itself could be lightning-quick, but if the links on either side (orient or act) are slow, the loop suffers. Even putting aside the effects of enemy action, a poorly conceived or poorly supported command structure can cripple an OODA loop.
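The arithmetic behind that last point is simple enough to sketch. The figures below are wholly invented for illustration (none come from doctrine or data); the structure of the sum is the point, namely that the slow links, not the decision itself, set the pace of the loop.

```python
# Illustrative arithmetic only: all latencies are invented. If the links
# between loop elements are slow to negotiate, even a dramatic speedup of
# the "decide" element barely shortens the whole cycle.

stages = {"observe": 10, "orient": 20, "decide": 30, "act": 10}  # minutes
links = {"orient->decide": 60, "decide->act": 60}  # slow centralized approval

baseline = sum(stages.values()) + sum(links.values())  # 190 minutes
stages["decide"] = 1                                   # a 30x faster decision...
improved = sum(stages.values()) + sum(links.values())  # ...still 161 minutes

print(baseline, improved)  # 190 161: the links, not the decision, dominate
```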
Command Structure
Opposing OODA loops compete as wholes — the objective is to complete the entire loop, not to perfect any one element. Completion of the OODA loop depends on the command structure in which it is embedded. For a single human, the command structure is centered within the brain. As the OODA loop scales beyond a single individual, its function becomes ever more dependent on the expanding communications architecture that stitches the pieces together. Humans have evolved to trust the information brought in by our senses, and the brain does a remarkable job of filtering erroneous or unclear sensory information. It is less successful at filtering out bad data from sources beyond our immediate senses. As the command architecture becomes more removed from human senses, it becomes more vulnerable to deception, interruption, and obscuration.
While paying lip service to the idea of decentralized execution, the Department of Defense’s language is often couched in terms of centralized control — the bane of a tight OODA loop — as if information must be consolidated centrally before data and direction are passed out as needed from a central something. In effect, many of the concepts postulate some kind of ubiquitous information source that passes out information to the people it determines need it. This leads to fanciful formulations like a “global COP” or the “adaptive information construct” and rests on the assumption that the greater network has better and more information than lesser (local) ones. It also reinforces the lack of trust in lower echelons that is characteristic of our post-Desert Storm command culture, in which decisions that should be taken at the tactical level are reserved to higher echelons. The over-centralized structure preferred in defense concepts reflects the reality that higher echelons trust neither the decisions made at the tactical edge nor the information possessed by tactical elements. Whatever flattening of the command structure grew out of COIN experience in Iraq and Afghanistan has already been reversed in the field, and new concepts merely nod to decentralized execution.
In most cases, the information needed to enable good (not perfect) decisions at a given command level is already resident at that level. For a flight of four fighters, the vast majority of the immediate information needed to execute the tasked mission comes from onboard sensors or other tactical elements in close proximity; higher echelons are critical for assigning the task, not for executing the mission. The utility of a command echelon drops rapidly with its distance from the fight. At the tactical level, there are more things that higher echelons can do to you than for you, in part because higher echelons have no real-time mechanism to determine what information tactical elements need or have. In my own operational experience, higher command never provided information that was useful in solving a real-time tactical problem, but often provided instructions or imposed restrictions that prevented execution. This is a failure not only of observation but of orientation — higher-level command elements were effectively “disoriented” because they lacked both relevant information and the appropriate context for the information in hand.
Decision Failure
Instead of focusing on providing enough information for higher echelons to turn the proverbial 5,000-mile screwdriver, we ought to focus on understanding and preventing decision failure. The fundamental assumption behind the idea of information superiority was that information is the key contributor to effective decisions, ignoring all of the other elements involved. A broader examination of cognitive processes reveals many sources of decision failure beyond a lack of information. Focusing on the decider rather than on the information flows enters the realm of the “fuzzy sciences” with which technocrats are less comfortable. Expanding our force development efforts involves hard work and is far less likely to result in a fancy new toy, perhaps making this strategy unattractive in an unabashedly technophile Department of Defense.
On the battlefield, information requirements drop as the decision space shrinks. An operator who has to determine the disposition of a squad needs much less information, and has much better access to the relevant data, than the commander who has to direct the disposition of a brigade. The massive information flows envisioned in an information-centric concept are large because it is not enough to let tactical elements make decisions based on the information they have on hand — we have embraced a paradigm in which upper echelons must have certain knowledge not only of the tactical decisions being made but also of the reasons behind them.
The potential for “information clutter” is rarely addressed. In sensor terminology, individual signatures are often buried in the clutter (i.e., other things that have similar signatures but are not the target). For information flows, the tiny bits of good data are buried in an avalanche of incomplete or incorrect data and, if the enemy has a say, actively misleading data. Human-machine teaming is often put forth as the way around this problem, but machines, which lack both intuition and experience, are likely to be unhelpful to humans who rely on both. Machines may help categorize what is possible, which is a long way from determining what is correct. Warfare is not an optimization problem. We should avoid constructing a complex architecture that will paralyze our command and control when (not if) it fails to deliver, and we should concentrate on training individuals who can operate effectively in the absence of complete information. Combat operations are always a gamble, and we need to rely on the gamblers, not the dice.
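A rough, invented-numbers calculation shows how quickly clutter swamps signal. This is a hedged sketch of the base-rate effect, not a claim about any real system, and it generously assumes a filter that passes every genuinely useful report.

```python
# All numbers are invented for illustration. Even a filter that wrongly
# flags only 1% of the junk buries a handful of true reports in false alarms
# once the inflow is large enough.

true_reports = 100           # genuinely useful items in the flow
total_reports = 1_000_000    # everything the architecture vacuums up
false_positive_rate = 0.01   # fraction of junk the filter wrongly flags

false_alarms = (total_reports - true_reports) * false_positive_rate
precision = true_reports / (true_reports + false_alarms)

print(int(false_alarms))    # 9999 false alarms against 100 real reports
print(round(precision, 3))  # 0.01: roughly 1 in 100 flagged reports is real
```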
Wrap-Up
To meet these challenges, the Navy is aggressively pursuing a multi-faceted approach to warfighting which ensures our information superiority in future conflict.
Nothing will ensure information superiority, and it is foolish to assume that any approach can guarantee it. At the end of the day, fog and friction are real factors on the battlefield at every level of warfare. Building a massive web of information sources multiplies the possibilities for error, and the communications links required to stitch the network together introduce their own sources of failure. The Department of Defense should spend less time trying to wish away fog and friction and more time working out how it will function when beset by them. Our current path puts us on a collision course with reality; we would be better prepared for combat if we planned for disorder rather than promised perfection.
Training an adequate human element is far more achievable, and likely to be more effective, than placing misplaced faith in machines that will magically eliminate uncertainty from the input stream. Uncertainty is a fact of life in any endeavor, and a human-machine team optimized for combat will not only accept uncertainty but embrace it — for surely the enemy is beset by uncertainty as well. A successful command team should center its efforts on executing effectively while doing without, thereby avoiding paralysis when fog and friction play merry hell with both the inputs and the outputs of command and control. Our architecture should be centered on the human, and especially on training individuals who thrive on chaos rather than being paralyzed by it, placing faith in our people rather than our machines.
Col. Mike “Starbaby” Pietrucha was an instructor electronic warfare officer in the F-4G Wild Weasel and the F-15E Strike Eagle, amassing 156 combat missions and taking part in 2.5 SAM kills over 10 combat deployments. As an irregular warfare operations officer, Col. Pietrucha has two additional combat deployments in the company of U.S. Army infantry, combat engineer, and military police units in Iraq and Afghanistan. The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Air Force or the U.S. government.
Photo credit: Master Sgt. Jeffrey Allen, U.S. Air Force