The Future of Algorithmic Warfare Part II: Wild Goose Chases
Editor’s Note: What follows is an excerpt from the authors’ forthcoming book, Information in War: Military Innovation, Battle Networks, and the Future of Artificial Intelligence.
What happens if the current hype around artificial intelligence and machine learning (AI/ML) fails to create lasting change across the national security enterprise? From government blueprints and industry guidelines to proclamations about the future of war, most of the current discourse focuses on the inevitability of a technological revolution without considering the prospect of a bundle of uneven experiments bound to fail more often than they succeed.
Based on our recent book — Information in War: Military Innovation, Battle Networks, and the Future of Artificial Intelligence — we see different futures on the horizon that call for prudence and a more robust dialogue about how people, bureaucracy, and knowledge networks collide with any new technology.
In the first article in this series, we explored how AI/ML might fail to live up to expectations due to the iron cage of bureaucracy and the tendency for military organizations to resist structural change. This article considers an alternative, albeit less likely, path towards stagnation: what happens when the military bureaucracy changes but novel experiments with AI/ML fail to shift doctrine and prevailing ideas about warfare across the profession of arms. The result is a man on horseback trying to code elusive cavalry charges.
Like the first installment, the scenario below is based on historical case studies in the book and presented as a slice-of-time narrative that follows the chairman of the Joint Chiefs of Staff on their morning commute in 2040. Each scenario explores how the interplay of people, bureaucracy, and prevailing ideas about warfare shapes the extent to which any new information technology catalyzes an episode of military innovation. The arc of each scenario follows insights from the book about how and why the military profession has struggled to integrate new information technologies to build battle networks over the last hundred years.
The historical context for the scenario below is the many starts and stops associated with French experiments with radar during the interwar period. We show that, much like the fate of Pierre David during that epoch, bold innovators can find their path blocked as much by the marketplace of ideas and received wisdom about war (tacit knowledge) as by the confines of bad bureaucracy. Old ideas can limit the potential of new technology. Even when resources flow freely and military professionals create new units to experiment with emerging capabilities, lasting change requires engaging the stories that old soldiers tell themselves about their profession.
* * *
The year is 2040. The chairman of the Joint Chiefs of Staff rides to the Pentagon in her unmanned car while listening to a roundup of the defense news. Her personal device, connected to the car, keeps reading even though she is only half listening. Though it could read her heart rate, analyze her movement, and even assess her emotional state, the smart device never quite lived up to the promise of sensing the right context and adjusting its tone or the speed of the podcast to her mood. To the algorithm, a car was a car and a commuter's prison that warranted a slow, steady reading pace.
Her mind was lost, caught between the past and the present. The chairman kept drifting from the daily news to a dissertation draft she promised to read for an old friend from her days as a lieutenant. He got out of the Army and became a military historian who always had more students than he had time to advise. The dissertation revisited the story of Pierre David and the mishaps of France’s military efforts to harness the power of radio detection during the interwar period. Several headlines interrupted her dreams of old death rays and failed experiments along the Franco-German border:
Funding in Question: Congressional Committee Members Question Marine Commandant Over Years-Long Infantry Machine Learning Initiative.
New Report: European Leaders Fear That Russia is Prepared to Fight “Futuristic” War Amidst Concerns About Stalled NATO Modernization.
Former Secretary of Defense Cites Brain Drain Crisis at Pentagon.
It had been 20 years since she made AI/ML part of her professional journey, but still the services seemed no closer to a major breakthrough. Sure, there were experiments and new units, but after billions of dollars and lots of hype they always seemed to end with no enduring change. The experiments added hundreds of billions of dollars to the debt just to optimize killing a little more: incremental gains at monumental cost.
There was no imagination. Each program perfected the preferred tactics that animated the services. The Air Force still loved dog fights. The Army and Navy were lost in dreams of decisive battles on land and sea. The Marine Corps believed that everyone was a rifleman even though it had been a decade since anyone — even civilians — had killed with only iron sights. Everyone lived inside stories of past wars and stale fables about old ways to solve future problems. The weight of received wisdom crushed the promise of new technology.
The chairman asked her device to summarize recent hearings by a congressional subcommittee on the status of AI integration across the U.S. military. The bottom line, the expert testimony suggested, was a generational divide within the services. There was an enduring appetite for AI/ML, but junior officers struggled against old ways of war. Senior leaders, commanders, and civilian directors perceived a need for machine learning and novel systems but tended to pour old wine into new bottles. They excelled at getting Congress to fund experimental units that used new algorithms to fight old wars. Every new unit meant a promotion for a young officer the old guard had mentored, even when the experiments failed to produce results. Costly pageantry masqueraded as progress.
She remembered hearing about a marine effort to have an AI agent compete with an old sergeant to recognize tactical problems and solve them. Even though the machine proved faster, the report by the commanding officer of the experimental unit suggested that the machine couldn’t see fighting spirit or promote warrior ethos. That was probably true, but it was only partially relevant. Her own calibrated smart device could tell her emotional state but struggled to know when to shut up unless prompted. It still saved her hours a day and allowed her to navigate large volumes of information.
Old ideas about war lived like ghosts in the machine, forever appearing as bugs distorting efficient processes. Despite the vast sums and the willingness across the services to experiment, change was slow and uneven. This rhythm reinforced a worldview cultivated in the profession of arms that there is more continuity than change in war. The character of war rarely shifted in a significant fashion, at least in spans of time measuring only a few decades, and even when it did, the disruptions didn't replace the insights of great commanders. She read more articles about training algorithms to think like Napoleon than articles proposing a fundamental rethinking of operational art.
The profession loved technology, but the affair was more a quest to perfect the art of remote death than real change. Her fellow officers believed that increased precision and lethality would always be beneficial, particularly as America's adversaries modernized and invested in new technology. Even North Korea could hit a moving tank over 1,000 kilometers away. Global precision strike, the recon-strike complex, pulled the profession, regardless of national boundaries, to search for new scientific concepts, set up research centers, and provide seed money to hot-shot technical specialists, all in the name of protecting existing missions. The Navy still dreamed of fleet battle and "crossing the T," albeit with partially manned surface combatants led into battle by satellites rather than antique spyglasses. The Army called for deep strikes using hypersonic missiles as if they were the artillery tubes of old and discussed how to integrate them with maneuver formations flying halfway around the world to seize an airfield. The Space Force remained lost in airpower theory, trying to explain the expanse with references to Douhet, Trenchard, Mitchell, and Warden.
All the services were locked in modernization initiatives, but in the chairman's time in the job, no initiative had turned into a major reconceptualization of how to fight and win wars. New equipment that locked in old ways of war was a false promise. The service chiefs and senior civilians confused more investment with better investment. The Department of Defense was a case study in diminishing marginal returns.
The congressional summary reminded the chairman why she was entertaining calls from the administration to ask the current commandant of the Marine Corps to step down, or even more radical whispers proposing changes to the U.S. legal code to reform the services. The step seemed drastic, but his failure to achieve fundamental goals that the chairman herself had set, tied to a mandate the secretary of defense had given her nearly three years prior, might warrant calling for his resignation. Politics always felt dirty, even when it was the price of progress.
Still, the old general made it hard. He was decorated, charismatic, and unimaginative. He held fast to the mantra that every marine was a rifleman even though there were now more mechanics and coders than grunts. He wasn't wrong; war was a human endeavor. But if the Spartans could have imagined coding drone swarms to penetrate rival hoplite formations, they would have traded their shields for tablets. Grand old warriors like him perpetuated the addiction to an image of the honorific warrior, more muscle than mind. Of course his machine-learning campaign failed. He wanted to use the best decision-support algorithms on the market to justify his narrow historical reading of maneuver warfare and apply it to squad-level infantry attacks. No amount of money could save a bad idea from its eventual collapse.
The old Marine general wasn’t alone. The other services were just as stuck. She bore the brunt of their wild goose chases. When she took over as chairman, the secretary of defense gave her a simple charter: Make it happen. Congress was sick of the waste, the stagnation, the promise of revolution unfulfilled.
The chairman was younger than most of the generals who still ran the major combatant commands and populated the upper echelons of the services. She was also unlike many of the men and women who tended to rise to the top at the Pentagon. With an engineering and analytics background, she had spent her early career in one data science-linked posting after another. Work with the Chief Digital and Artificial Intelligence Office in the 2020s pushed her ahead of her peers: she streamlined a system to predict the spread of pandemic disease and jump-started projects to model future food and water insecurity. She jumped ahead twice more as she brought a new perspective to the supply chain challenges of the unprecedented budget debates of the early 2030s.
Now she found herself sitting uneasily between the services and the secretary of defense. The old Marine general was a symptom of a larger disease: making every new technology and experimental unit conform to old ideas about warfare. At least he tried. He promoted multiple officers who led experimental formations. He talked about the promise of new technology but dreamt of battles like Tarawa and Guadalcanal.
* * *
As the scenario implies, there are reasons to be concerned that the current wave of enthusiasm around AI/ML will collide with legacy ideas about warfare to stall progress. New technology cannot overcome old ways of war. Even when the bureaucracy is nimble and allows the man on horseback to develop new formations and experiments to test the promise of algorithmic warfare, change still requires new stories about warfighting. These stories, theories of victory and doctrinal concepts, should resonate with — or change — the prevailing views services have of themselves.
The historical precedent for the future scenario above is interwar France. Unlike experiments with radar in the United Kingdom, France struggled to pick a single development path and integrate the new information technology with prevailing warfighting concepts. French leaders struggled to scale their early experiments into something akin to the British Chain Home system that linked radar to air defense. The result was a series of wild goose chases that dissipated resources and drove a brain drain to the private sector. The military profession would be wise to study these stories of stalled progress and even failure with the same fervor and focus that it applies to military innovation success stories. Studying only the winners leaves the profession blind to the cautionary tales that arise from the more numerous failed gambits.
The historical case and scenario suggest a need to open up the marketplace of ideas to catalyze change. Too often the military debates with itself, absent a larger network of civilian scientists, officials, and concerned citizens. War on the Rocks has fought a rear-guard action against this tendency to confine discourse to increasingly narrow podcasts and forums moderated by military professionals for military professionals. This narrowness is partially a function of the flood of information and niche outlets. Yet more information doesn't mean more diverse data points and perspectives. It can just as easily mean recycling the same ideas with minor variations or slinging spite and gossip in lieu of thoughtful reflection and critique.
The only way to escape the gravity of old ideas is to inject new thinking. The more the military profession opens itself up to debates that reach far beyond its ranks to include civilian scientists, academics, partners, allies, and concerned citizens, the more likely it is to reconsider old ways of war. Look no further than the never-ending barrage of articles about force design in the U.S. Marine Corps. It is a sharp internal debate about the efficacy of current reforms mostly between retirees and current leaders. While neither side has a monopoly on truth, both would be wise to bring in different services and outside civilian perspectives to imagine a wider range of scenarios.
Will the current interest in AI/ML across the Department of Defense and the services end in wild goose chases? It is too soon to say. Experimental programs that link AI/ML to combined joint all-domain command-and-control efforts show promise. There also appears to be money on the table to scale these efforts across the services, alongside insights from the war in Ukraine that illustrate how to adapt tactics to take advantage of AI/ML. What has not been as forthcoming are entirely new concepts and doctrine across the services. AI/ML can mean more than perfecting the promises of effects-based operations to hit the right target faster. True change will emerge with new concepts that force a rewriting of core treatises like Field Manual 3-0, Operations, for the U.S. Army and Marine Corps Doctrinal Publication 1, Warfighting, for the U.S. Marine Corps.
Benjamin Jensen, Ph.D., is a professor of strategic studies at the School of Advanced Warfighting in the Marine Corps University and a senior fellow for future war, gaming, and strategy at the Center for Strategic and International Studies. He is also an officer in the U.S. Army Reserve.
Christopher Whyte, Ph.D., is an assistant professor of homeland security and emergency preparedness at Virginia Commonwealth University.
Col. Scott Cuomo, Ph.D., currently serves as a senior U.S. Marine Corps advisor within the Office of the Under Secretary of Defense for Policy. He helped co-author these essays while participating in the Commandant of the Marine Corps Strategist Program and also serving as the service's representative on the National Security Commission on Artificial Intelligence.
The views they express are their own and do not reflect any official government position.
Image: AI-generated art by Dr. Benjamin Jensen