Seeing, Knowing, and Deciding: The Technological Command Dream That Never Dies?


Could Gen. William Westmoreland see the future? In a 1970 issue of Army Aviation Digest, Westmoreland, then chief of staff of the U.S. Army, offered his view on the future of command decision-making. In the print version of a speech he had delivered the year before, he predicted that command would be practiced on a highly surveilled, automated, and interconnected battlefield which, if realized, could “assist the tactical commander in making sound and timely decisions” and permit “commanders to be continually aware of the entire battlefield panorama.” Although Westmoreland was certainly no clairvoyant, his language would not appear out of place in discussions of contemporary dreams of command. This is especially true of efforts to implement Joint All Domain Command and Control, which, in broad terms, is the U.S. military’s effort to link decision-makers across land, sea, air, space, and cyber using advanced communications and data analysis technologies.

The release of the Department of Defense’s Joint All Domain Command and Control strategy in early 2022 offered additional information about the goals of the department’s supposed reimagining of the technological elements supporting command in the U.S. military. The document is couched in the rhetoric of significant, technologically inspired changes that will allow the military to analyze and process data from a range of sources and to decide faster, based on advanced analysis capabilities. According to the declassified summary, the strategy is structured around three pillars: “sense,” “make sense,” and “act.” The strategy’s drafters envision these core components being enabled by the information and decision-support capabilities of artificial intelligence, machine learning, and advanced sensor systems, with “information and decision advantage” at the “speed of relevance” as the intended result. Westmoreland’s “sound and timely” decisions, as well as his notion of a commander aware of the “entire battlefield panorama,” would fit well into this schema. So too would the desire, articulated by Adm. William Owens some 30 years later, to “lift the fog of war” through the integration of new information and communication technologies.


As articulated in the strategy, such changes are proposed as a response to novel threats. Yet, as the perspectives of both Westmoreland and Owens attest, a review of past imaginations of command-related technologies in the U.S. military makes one thing clear: The vision for a contemporary advanced command system is yet another instantiation of a longstanding dream about the orchestration of military decisions, rather than something entirely original. In such visions, the U.S. military risks what B.A. Friedman and Olivia A. Garard suggest is the tendency to conflate “technological capacity with command.” Thus, in light of the recent release of the strategy, it is worth exploring the rhetoric that sustains these recurring visions of technologically enabled command. We should also consider the practical implications of the technological roadblocks facing current efforts, as well as the possible perils of the strategy’s promises of faster and better decisions.

Echoes from the Past

While such visions have roots that trace at least to air defense programs like the Semi-Automatic Ground Environment in the 1950s — as well as early Pentagon-supported computing research initiatives for command and control during the 1960s — notions of command systems approximating the current intentions began to coalesce more firmly during the latter decades of the Cold War. A few examples resonate with current efforts.

Driven by military competition between the Soviet Union and the United States, as well as fears over the implications of Japanese fifth-generation computing, the Defense Advanced Research Projects Agency undertook a decade-long, billion-dollar effort known as the Strategic Computing Initiative. Starting in the early 1980s, it featured a multi-pronged effort to explore the implications of AI for military purposes — a key element of which was command and control, particularly in the form of a proposed battle management system for the U.S. Navy. Like current efforts, Strategic Computing’s command-related research promised AI-enabled expert systems that would help commanders quickly sense and decide what to do. The program’s founding document noted the goal of developing human-like “intelligent capabilities for planning and reasoning,” the desire to augment human judgment with AI-enabled expert systems, and the need to make sense of “unpredictable” military situations rapidly. The plan was to go beyond the era’s principal existing option, the Worldwide Military Command and Control System, itself predicated on “accurate and timely decisions,” by leveraging intelligent computing that could assist in planning, formulating decision options, and managing uncertainty in quickly changing combat environments. In the end, as Emma Salisbury points out, the program had mixed results and generally failed to live up to its big AI-related promises due to technological roadblocks as well as funding shortfalls.

Yet almost as soon as Strategic Computing fizzled out, similar high-tech approaches to command systems popped up again. In the late 1990s, the U.S. Army’s Future Combat System proposed a modernized command structure linking manned and unmanned systems over wireless networks. The initiative emerged out of assumptions about war associated with the Revolution in Military Affairs, in which defense officials believed smaller, faster forces linked by advanced communications networks would prove decisive in future conflicts. While the modernization effort also intended to replace some military hardware, such as the M1 Abrams tank, it was the command network that bonded the program together. The desire for speed and the notion that information dominance might prove critical in future conflicts drove the Future Combat System. In terms of command, as laid out in a 2007 Congressional Research Service report, this manifested in technological artifacts such as “Battle Command Software,” the desire for automated mission planning capabilities for rapid response, and the intent to improve “situation understanding” through the use of maps and databases that could track enemy locations. Indeed, during congressional budgetary hearings, defense officials cited the supposed advantages accrued by soldiers during testing, principally stemming from “increased soldier awareness and battlefield understanding.”

Similar to Strategic Computing, the Future Combat System endeavored to help commanders assess, make sense of information, and act more quickly than was previously possible. And, also akin to Strategic Computing, the program struggled to live up to its promises. A 2012 RAND assessment of the program largely considered the project a failure due, at least in part, to over-aggressive timelines and shifting goalposts. While RAND’s report on the Future Combat System took on aspects of command and control only in a limited fashion, it did point to performance issues in the command system’s ability to complete tasks such as automated data fusion to generate a common operational picture. The report noted that such issues degraded “a key operational linchpin” of the program.

Although the Future Combat System and ongoing plans to develop a high-tech command system are not precisely the same, the parallels between the initiatives are close enough that defense officials have felt compelled to contend that current efforts will not repeat past mistakes. Even amid such claims, there are notable similarities between the Future Combat System’s intention to help the Army “see first, understand first, act first” and contemporary efforts to “sense, make sense, and act” at a quicker pace.

Apparently, dreams of technologically enabled sensemaking and fast decisions die hard. Ongoing justifications of new technologically advanced command systems commonly rely on rhetoric that coheres with the desires animating predecessors such as Strategic Computing and the Future Combat System. For example, members of the Joint All Domain Command and Control development team have argued that conflict in the future will feature decision timelines reduced to “milliseconds,” necessitating the integration of advancements in computing, AI, and machine learning as the “linchpin” of command requirements. As the strategy argues, such technological changes are necessary for processing and delivering information at the speeds required in modern conflict.

Furthermore, with respect to AI and the technological elements supporting military decision-making, similar rhetoric is deployed outside of discussions specific to current command visions. The 2021 report of the National Security Commission on Artificial Intelligence, co-chaired by former Deputy Secretary of Defense Robert Work, asserts that, in military contexts, AI will help to “find needles in haystacks” and “enhance all-domain awareness,” leading to “tighter and more informed decision cycles.” Similarly, the Defense Advanced Research Projects Agency’s notion of Mosaic Warfare is itself a metaphor for the intent to dynamically link weapons, sensors, and decision-makers. Such arguments appear all around us and reflect the degree of technological optimism surrounding AI-enabled systems. Interestingly, while these recurring desires for technologically enabled command systems are framed in terms of “rapid changes” to the security ecosystem and “significant new challenges” facing the United States, what seems to be happening is that the Department of Defense has turned again to a decades-long aspiration. For justification, it leans on rhetorical assertions and problematic assumptions that also have decades-long histories. And while strategic and political factors have changed in the time spanning from the Cold War to contemporary security challenges, similar tech-optimistic framings persist.

Inertia and Implications

Today, Joint All Domain Command and Control certainly has a degree of institutional momentum. As Gen. John Murray (now retired) stated in a 2020 congressional hearing, “nobody is arguing with the concept.” Furthermore, as recently as this year, Gen. Mark Milley proclaimed the “irreversible momentum” toward the program’s implementation. Whether that momentum pushes in a coherent direction is unclear, even considering the release of the recent strategy. What should be noted, however, is that in the years since initiatives such as Strategic Computing and the Future Combat System, AI and machine learning, the hardware and data availability needed to train algorithms, and other digital technologies have become more advanced and capable. Nonetheless, AI and machine learning remain afflicted by serious problems: bias, issues with training data, trust in human-machine interaction, and difficulties related to explainability (the desire to know why an AI system came to the decisions it did), though the latter is an issue the Department of Defense is working on. As an example, a 2022 report by Stanford’s Institute for Human-Centered Artificial Intelligence suggests that even in the pristine research environments of universities and private-sector companies, where datasets can be labeled and curated and algorithms trained and tested, language models reproduce the bias in their training datasets at increasingly high rates. Thus, with respect to military AI, the problem is that even in ideal settings machine learning models can make unexpected and undesirable errors.

Furthermore, many models are trained with performance on benchmark datasets as the ultimate goal, not functionality in the real world. As machine learning researchers have suggested, models are frequently “taught to the test” of the benchmark data, yielding results that cannot be replicated outside of controlled research environments. Major questions therefore persist about AI and machine learning’s place in military command processes. For example, what data will the elements of command systems that rely on AI’s prediction and decision-support capabilities be trained on? And how certain should officials be of system performance amid the complexity of war, particularly if AI-enabled command systems are offering up recommendations or possible courses of action? Avi Goldfarb and Jon Lindsay have recently pointed to the problems with data in military environments. Leaving such problems unsolved carries major risks, particularly in the case of AI-enabled command decisions.
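To make the “taught to the test” problem concrete, consider a minimal sketch in Python. Everything here is an illustrative assumption: the data is synthetic, the two features stand in for real sensor inputs, and the shift parameter mimics deployment conditions the benchmark never covered.

```python
# A toy model that scores well on its own benchmark but degrades sharply
# under distribution shift. Synthetic data only; no real system is modeled.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two-class points; `shift` moves the class means toward each other."""
    class0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))
    class1 = rng.normal(loc=2.0 - shift, scale=1.0, size=(n, 2))
    X = np.vstack([class0, class1])
    y = np.array([0] * n + [1] * n)
    return X, y

# "Benchmark" setting: train and test data drawn from the same distribution.
X, y = make_data(1000)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"benchmark accuracy:    {model.score(X_test, y_test):.2f}")   # ~0.92

# "Deployment" setting: same task, but conditions the benchmark never saw.
X_shifted, y_shifted = make_data(1000, shift=1.0)
print(f"shifted-data accuracy: {model.score(X_shifted, y_shifted):.2f}")  # ~0.50
```

The point is not the specific numbers but the pattern: a model can look excellent against the data it was evaluated on and collapse toward coin-flip performance when the environment moves, which battlefield environments do constantly.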

That said, time horizons matter. Current efforts surrounding Joint All Domain Command and Control are mostly focused on initiatives such as cloud capabilities and improving interoperability and data sharing across the services and other partners. The implementation plan remains classified, so efforts related to using AI for prediction or planning are murky. Still, it is worth assessing what the military may turn to next in light of the recent public strategy document, particularly if AI-enabled systems are envisioned as providing, as the strategy suggests, the “technical means to perceive, understand, and predict the actions and intentions of adversaries, and take action.”

Apart from concerns regarding technological functionality, we should also assess what the dominant assumptions behind current efforts might mean in practical terms. As addressed above, the desires to act faster and to reduce confusion on the battlefield are old ones in military thought, frequently linked to the supposed capabilities of advanced computational systems. Treating these enduring problems as solvable through command-related technologies is a worrying prospect, particularly if such technologies are envisioned as the path to achieving a rapid, decisive victory.

Scholars have documented the perils of assuming, and pursuing, fast, decisive war. Dave Johnson recently explored this issue in War on the Rocks, critiquing the “belief that future wars will be short, decisive affairs.” Antoine Bousquet, furthermore, argues that militaries seeking such decisiveness have repeatedly turned to science and technology. While it is important not to conflate general strategic considerations with tactical decisions on the battlefield, the decisions commanders make should support overall political objectives. And as Paul Brister suggests in a recent Brookings edited volume, there are tough lessons to be learned from assuming that technologically enabled tactical speed will lead to short or easily won wars. In the same volume, Nina Kollars makes a parallel point, suggesting that “the lure of faster war leading to faster victory is not only questionable but also a persistent techno-pathological obsession.” Thus, speedy, AI-enabled decisions as a means of increasing tactical speed should not be seen as undeniably beneficial, particularly in the face of current discussions of so-called “hyperwar.”

Notably, Joint All Domain Command and Control is not by itself a theory of victory or a warfighting concept, and it does not advocate a speedy end to war as such. Rather, its proponents envision it as an enabler of other warfighting functions through establishing, at least in its initial phases, a more interoperable set of information systems. The proposed result of such endeavors, as the strategy contends, is to “directly and dramatically improve a commander’s ability to gain and maintain information and decision advantage.” That said, its proponents risk promoting advanced command systems as the enablers of rapid, decisive victory, thereby buying too heavily into technological optimism as a solution to achieving political or strategic goals. As the vision for a new command system continues to emerge, we would be better served by being wary of any “techno-pathological obsession.”

Furthermore, contrary to some conclusions of the National Security Commission on Artificial Intelligence, it is not entirely apparent that AI-enabled systems will succeed in clarifying very much in the conduct of warfare. As Sam Tangredi notes, AI systems still struggle with problems that are not well-structured, and war is anything but a well-structured problem. Moreover, deep-learning models have proven relatively easy to trick by, for example, altering pixels in images fed to the algorithm. Significant elements of war involve diversionary tactics and stratagems meant to confuse and mislead adversaries. The combination of this technical vulnerability and the common practice of deception in war may produce further confusion rather than clarity for those operating within any fully realized AI-enabled command system.
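The pixel-level trickery referenced above exploits the fact that a model’s sensitivity to its inputs can be computed and then turned against it. The sketch below is a deliberately simplified illustration using a toy linear classifier and synthetic “images”; attacks on real deep networks, such as the fast gradient sign method, follow the same logic using the gradient of the loss. Every name and parameter here is an assumption for illustration, not a description of any fielded system.

```python
# A gradient-style adversarial perturbation against a toy linear classifier.
# For a linear model, the gradient of the score with respect to the input is
# just the weight vector, so sign(w) gives the worst-case pixel-wise nudge.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic 8x8 "images" flattened to 64 pixels; class 1 is brighter overall.
n_samples, n_pixels = 500, 64
labels = rng.integers(0, 2, size=n_samples)
images = rng.normal(size=(n_samples, n_pixels)) + labels[:, None] * 1.0
model = LogisticRegression().fit(images, labels)

weights = model.coef_.ravel()
# Pick an image near the decision boundary so a tiny perturbation suffices.
idx = int(np.argmin(np.abs(model.decision_function(images))))
x = images[idx].copy()
original = model.predict(x.reshape(1, -1))[0]
print("original prediction: ", original)

epsilon = 0.1  # small per-pixel change, visually negligible in a real image
nudge = np.sign(weights) if original == 0 else -np.sign(weights)
x_adv = x + epsilon * nudge
print("perturbed prediction:", model.predict(x_adv.reshape(1, -1))[0])  # flips
```

A battlefield adversary does not even need gradient access to exploit the same weakness: camouflage, decoys, and spoofed emissions are, in effect, adversarial inputs crafted the old-fashioned way.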

Conclusion

Accordingly, we should hesitate to conclude that new command-related technologies will suddenly illuminate the battlefield for commanders, enabling faster and better decisions, or that they will necessarily lead to the anticipated positive military outcomes. Further, we should be skeptical that such desires are all that new; as Martin van Creveld’s work on command suggests, they will remain very hard to perfect. Justifications for developing a new, more advanced command system echo past projects such as the Strategic Computing Initiative and the Future Combat System, both of which generally failed to hit their ambitious marks. Debates over the merits and possibilities of future technologically advanced command systems should at least consider the rhetoric, and the outcomes, of these past projects.

Furthermore, the rhetoric surrounding the development of new high-tech command systems is potentially risky if it is substantively linked to assumptions about fast, technologically decisive war. Historical cases demonstrate the consequences of building such assumptions into military practice. Azar Gat’s work on the history of military thought leading up to World War II, while too complex to detail here, demonstrates that myth-like ideas about technological artifacts such as motor vehicles and airplanes shaped how theories of fast, mechanized war were put into use. Thus, a reflective consideration of the historical pattern of linking fast decisions and enhanced situational awareness to advanced computation or AI-enabled systems, along with a clear-eyed view of the very real technological hurdles facing the integration of AI-related technologies into decision processes, will help to avert the worst outcomes.


Ian Reynolds is a Ph.D. candidate in International Relations at the American University School of International Service studying the history and cultural politics of military artificial intelligence. He is also a doctoral research fellow at the Internet Governance Lab and a research associate at the Center for Security Innovation and New Technology, both housed at American University. During the 2022/23 academic year, Ian will be a pre-doctoral fellow at Stanford’s Center for International Security and Cooperation as well as the Institute for Human Centered Artificial Intelligence. 

Image: U.S. Army