The Future of Algorithmic Warfare Part IV: Promise Fulfilled
Editor’s Note: What follows is an excerpt from the authors’ new book, Information in War: Military Innovation, Battle Networks, and the Future of Artificial Intelligence.
What would an ideal case of artificial intelligence and machine learning (AI/ML) look like for the Department of Defense in 2040?
This question is the focus of Part IV of our Future of Algorithmic Warfare series. In contrast, the first three parts of the series explored less-than-ideal AI/ML developments occurring over the next 17 years, ranging from “Fragmented Development” to “Wild Goose Chases” to a worst-case “Stagnation” scenario, which would leave the Department roughly where it is today—and far behind the Chinese military.
We hope these fictional stories, all adapted from our new book, Information in War: Military Innovation, Battle Networks, and the Future of Artificial Intelligence, achieve two objectives. First, that they help reinforce for readers former U.S. Defense Secretary Jim Mattis’ frequent warnings that “the supremacy of American military is not preordained” and, similarly, that the U.S. military does not have a “preordained right to victory on the battlefield.” Simultaneously and most importantly, we hope that the stories inspire Americans working across the national security community to accelerate the detailed, essential work required to ensure that the U.S. military, come 2040, has a clear edge in the ongoing AI/ML development and integration battle with China.
The scenario below provides an optimistic outlook of what could occur if the Department of Defense realizes Secretary of Defense Lloyd Austin’s “AI-empowered” military vision. In this scenario, imagined AI/ML capabilities are successfully transferred from strategic guidance documents and speeches in the Pentagon’s outer E-Ring “to the tactical edge.” These capabilities subsequently influence how everyone in the Joint Force operates, from the Chairman of the Joint Chiefs of Staff to each soldier, sailor, airman, marine, and guardian.
* * *
The year is 2040. The Chairman of the Joint Chiefs of Staff finishes her cup of coffee and 3D-printed turkey bacon as she gets into the driverless car. She loves that the taste is calibrated to her palate based on an analysis of her food purchases over the last year: Spotify for food. The chairman activates her neural link as she scans the hieroglyphic-like mix of images and text that now passes as daily news, projected in front of her. Each shape or text blurb she touches triggers a story passed to her without words. Even though she has used the system for the last five years, it still takes some getting used to. Some days she misses old-fashioned reading.
The silence in her driverless car is near complete, even though the movements around her are occurring at almost 100 miles per hour. She looks out the window. Her augmented reality contact lenses indicate the temperature outside and suggest some music from her playlist to match the season. She looks back at the hieroglyphs waiting for her. The machines can anticipate her curiosity. As she looks, her neural link begins to transfer the data to her mind. It still feels like words, even though all her ears actually hear is the old jazz piano ballad playing in the driverless car.
“China Looks to Brute Strength: Beijing Seeks to Counter America’s AI-Navy by Building a Larger Fleet Despite a Declining Work Force and High Inflation.”
“Technology Transfers are Military Strategy: Marine Commandant Urges Congressional Committee to Loosen Restrictions and Support Allies in the Indo-Pacific, Similar to Earlier Successes in NATO.”
“A Veteran’s Story: New AI-assisted Novel About PTSD Reaches Number One on the Book Charts.”
Her neural link stops its data transfer as she turns to look out the window. These headlines spark conflicting feelings in her: pride in the defense community’s accomplishments, and uneasiness born of the knowledge that America’s military can’t rest on its laurels. She blinks three times, activating the secure hieroglyphs. The windows in the car go black and static noise plays to create a layer of ambient sound. She opens a report by a congressional subcommittee on the status of AI integration across the U.S. military, and the neural link begins its dictation of the executive summary. The Chairman has read the report a dozen times, but she still takes it in slowly, searching for any insight she may have previously missed.
The report is unabashedly congratulatory and optimistic, much to her chagrin. In her opinion, back-patting is synonymous with lethargy and tunnel vision. But she has to admit, the report does a good job tempering the descriptions of expert testimony about AI integration over the past two decades by addressing the unique conditions under which success was found. American military capabilities weren’t enhanced simply as a matter of momentum. No, successful development and use of new systems were the result of key personnel with an eye on the horizon who demanded that AI answer strategic imperatives rather than simply bulk up existing force postures and buy more preferred platforms because “it has AI now.” Despite all the advances, there were still conniving bureaucrats and budget battles. Sadly, no amount of optimized code could replace turf wars.
The report points to AI systems across the U.S. military as all tying into her predecessor’s push to reimagine and modernize the NATO mission in Europe. The war in Ukraine helped bring European military leaders to their senses. Endless intrigue and half-baked Russian efforts to use deep fakes and election hacking to undermine democracy pulled the politicians along. Historians were already calling the 2020s the post-truth decade. Russian interference in Western politics was, without a doubt, a core driver of the constant culture clash between far-right political factions and mainstream society that led to the troubles in France, Austria, Italy, and Poland. But it wasn’t just political warfare. The decade was hell for some European countries, particularly those in the Balkans, on the Baltic, and bordering Russia. Cyber attacks on critical utilities timed against anti-Russian political events. “Hidden” green men infiltrating pro-Russian social movements and arming stateless dissidents in Estonia and Lithuania. Russia emerged from its loss in Ukraine more spiteful and eager to fight in the shadows while rebuilding its ground forces.
Something had to change. NATO was less and less effective, at least as far as the new generation, her generation, of military officers saw it. Not that the idea of collective defense was fundamentally bankrupt. But the posture of the organization was fundamentally wrong. Even in the 21st century, the alliance was better equipped to combat mechanized formations than formations of bots or drone swarms. The cyber element was getting stronger, but it wasn’t enough. Her predecessor’s call to use AI to produce a “gray zone” deterrent, to dissuade Russia from continuing its onslaught of interference and manipulation, spoke directly to officers, scientists, and engineers across the military. And the Special Committees set up to assess and advocate for new concepts and systems helped nuggets of gold float to the top. By 2035, Moscow’s approach had changed. Russia’s “hidden” special forces personnel were tracked and outed by means of advanced facial recognition analytics. Western drone swarms fed immense volumes of data back to NATO combatant commands that automatically deployed unmanned assets to intercept provocative Russian airspace intrusions, while informing alliance militaries about Moscow’s militia activities. AI systems built in partnership with Western veteran-led startups were increasingly combatting disinformation. Additionally, blockchain foundations enthusiastically supported NATO’s global information network of persistently trackable, automatically generated, and attributably verified assessments of information. Taken together, these developments gradually forced the Russian government, more often than not, to revert to the energy coercion and nuclear posturing of decades past, but with diminishing returns. Advances in energy efficiency made Moscow’s energy coercion efforts far less effective. Further, the collapse of the ruble left Moscow’s military looking more like North Korea’s than a modern major power’s: big, old, and ugly.
The document matched insights from a report her science advisor gave over a year ago, pointing out the parallels between Britain’s experience developing radar systems before World War II and the United States’ early AI adoption. In both cases, prevailing views on the nature of warfare ran counter to the incorporation of new technologies. Even today, she reflected, many senior military leaders and garrison commanders believed that warfighting was a fundamentally human endeavor that could be aided by technology only up to a certain point. But the existence of such views, as well as of the structural realities they produced, was irrelevant so long as an ability to cultivate creative thinking and to link key personnel across the services could be institutionalized. Based on the report, she had organized a summer study at Newport led by the Office of Net Assessment. Its recommendations gelled with this view. Let strategic needs inform technology cultivation, but not at the expense of funding promising technology without clear application. Build a culture centered on the idea that prevailing assumptions about war are subject to change. And ensure avenues for personnel to build ideas and advocate for them beyond their service-specific context.
She turns to War on the Rocks. A new story by a young officer reflects on the difference between his father’s military and the one he commissioned into only five years ago. At the heart of his narrative is the idea that he doesn’t see his job as permanently defined by one mode of operation or theater of war. She thinks back to Britain’s radar story, seeing the parallel to the bomber mindset that predominated through the mid-1930s. Britain’s battle network meant survival for the island nation, but as late as 1938 British fighter pilots could not have known the role they would play. The truth was that the critical technological pieces were only just then falling into place that would turn the fighters and the coast watchers of the last war into the heroes of the next one. The parallel suggested that ongoing AI integration was at least a little less a matter of coincidence than the congressional report implied. Given the uncertainty of tomorrow’s wars and military needs, this was some comfort. Maybe, just maybe, if she learned from the lessons of networks of military professionals and technologists gone by, she could maintain and even improve America’s competitive edge.
* * *
The only future that sees military reforms embrace AI/ML is one in which people champion new ways of thinking about war and entirely new organizational designs. Tacit knowledge, and institutions that are more open to change than closed, enable leaders to introduce and take advantage of new information technologies, including AI/ML. Closed systems, by contrast, struggle to generate new ideas.
When it works, military innovation aligns civilian leaders who have strategic vision with a mix of warriors and engineers who are eager to experiment. In Britain before the Second World War, the vision of individuals like Stanley Baldwin, H.E. Wimperis, Henry Tizard, Robert Watson-Watt, and, later, Winston Churchill was rooted in a common understanding of the deficiencies of the tools and strategies that characterized the status quo. The bomber deterrent, the belief that a nation’s bomber fleet alone could deter conflict because “the bomber will always get through,” was fundamentally flawed in the face of an adversary willing to enlarge its military forces without reference to reasonable restrictions. And the proposed offsets of the day rang hollow, with technological game-changers like the fantastical “death ray” proving infeasible or entirely unready for the battlefield.
Looking forward to AI/ML, the outlines of what could become a consensus about algorithmic warfare are starting to form, even if unevenly, across the Department of Defense and the joint force. The Department’s recently announced “Replicator” initiative is one such example. This Deputy Secretary of Defense-led effort is laser-focused on ensuring the Joint Force rapidly fields “all-domain, attritable” sensing and strike systems to help overcome China’s advantage in mass. Specifically, Deputy Secretary Kathleen Hicks has shared that Replicator’s goal is to field capabilities that “can help a determined defender stop a larger aggressor from achieving its objectives, put fewer people in the line of fire, and be made, fielded, and upgraded at the speed warfighters need without long maintenance tails.” And given the urgency of strengthening conventional deterrence vis-à-vis Chinese military developments, she is demanding that the Department do so within the next 18 to 24 months. Achieving her vision, which will inevitably involve fielding these new capabilities years earlier than they would have been delivered on more traditional Department acquisition timelines, will require multiple forms of AI/ML integrated across the joint force. Encouragingly, within weeks of the Replicator announcement, we are already seeing tangible evidence, through innovations such as the Marine Corps’ highly autonomous, collaborative, low-cost tactical uncrewed aircraft system efforts, of the joint force aggressively acting on the guidance.
This effort, along with numerous others across all U.S. and many allied military services, provides reason for optimism. The Defense Department is starting to head in the right direction, and it is time to create new tactics and formations to take advantage of the moment.
Now, it’s up to all of us—the often forgotten yet most important component of any AI/ML discussion—the people inside the national security enterprise. We call on all warriors and engineers to roll up their sleeves and help the Department hone algorithmic warfare so the services can turn dreams about future battles into doctrine, training, and materiel solutions that back integrated deterrence and American military power.
It will be a long campaign. There are still legacy processes in the U.S. military, and an endless horde of court eunuchs and mandarins across the bureaucracy who will fight back against real change. They will resist not out of spite, but because the bureaucracy has perversely promoted risk aversion and a status quo that favors large, slow programs with unrealistic budgets. They will slow roll and stall change because it is all they know: process over progress.
Yet the moment calls for being bold and challenging assumptions about both what algorithmic warfare should look like and how best to adapt the defense modernization enterprise to accelerate change. It calls for those warriors and engineers to experiment, wargame, and red team as they search for the right way to build a more interoperable force that can leverage AI/ML to gain one, or many, positions of advantage. It calls for more skepticism than cynicism. If nine of ten officers agree about emerging concepts and capabilities, it should be the duty of the tenth to question that consensus. The more open the marketplace of ideas, the more resilient the AI-empowered concepts and capabilities will prove to be.
Col. Scott Cuomo, Ph.D., currently serves as a senior U.S. Marine Corps advisor within the Office of the Undersecretary of Defense for Policy. He helped co-author these essays while participating in the Commandant of the Marine Corps Strategist Program and also serving as the service’s representative on the National Security Commission on Artificial Intelligence.
Benjamin Jensen, Ph.D., is a professor of strategic studies at the School of Advanced Warfighting in the Marine Corps University and a senior fellow for future war, gaming, and strategy at the Center for Strategic and International Studies. He is also an officer in the U.S. Army Reserve.
Christopher Whyte, Ph.D., is an assistant professor of homeland security and emergency preparedness at Virginia Commonwealth University.
The views they express are their own and do not reflect any official government position.
Image: AI generated with Midjourney