What if the U.S. Military Neglects AI? AI Futures and U.S. Incapacity


This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the first question (part b.), which asks what might happen if the United States fails to develop robust AI capabilities that address national security issues.

 

The year is 2040 and the United States military has limited artificial intelligence (AI) capability. Enthusiasm about AI’s potential in the 2010s and 2020s translated into little lasting change. Domestic troubles forced a national focus on budget cuts, international isolation, and strengthening the union. Civil unrest during the 2032 elections worsened everything — factionalism and partisanship smashed through the walls of the Pentagon. Major initiatives foundered over costs and fear of aiding political opponents. A few smart, ambitious Department of Defense program managers pushed through some cutting-edge AI programs, but their scale was limited.

If the United States fails to develop robust AI for national security, the effect could plausibly range from devastating to net positive. The range of futures is enormous. AI and robotics could dominate the battlefield and threaten nuclear deterrence, plausibly costing the United States its great power status. Or the AI hype could fizzle, leaving U.S. national security actually more secure thanks to investments in other technologies.

The overall effect will depend on how key uncertainties resolve. First, can robotic systems powered by advanced AI generate sufficient advantage to prove decisive on the battlefield? Second, how significantly can information warfare counter AI advantages? Third, will advances in AI be more significant to national security than other emerging technologies? Fourth, will robotics and AI significantly threaten nuclear deterrence?

The remainder of the article explores these uncertainties and their implications for U.S. national security through three idealized worlds. In AI Explosion, advanced AI powers armies of autonomous systems, including massive drone swarms, and adversaries draw significant advantage from AI-enabled weapons, defense organizations, and logistics systems. For the purposes of this article, I define “autonomous systems” extremely broadly to include platforms and weapons with humans in, on, or off the loop. In AI Trinity, AI-based systems threaten nuclear deterrence in addition to dominating the battlefield. In AI Fizzle, the expansion of machine learning continues, but no revolution follows: AI offers some battlefield advantages, but nothing transformative, in part due to information warfare countermeasures.

The wide range of AI futures means the United States should identify and adopt solutions useful across all worlds. In brief, Congress should require the Defense Department to conduct regular, unclassified assessments of national security AI to better understand how risks are evolving. The United States should prioritize information warfare capabilities, particularly cyber security, electronic attack, electronic defense, and new operational concepts to integrate them. Private and government national security organizations need to be more adaptable and diverse by promoting new types of leaders, placing less emphasis on cultural fit, and revising hiring practices that prioritize experience and insider knowledge over technical skills. U.S. war colleges and Strategic Command should start new initiatives — and expand existing ones — to assess the impact of AI and emerging technologies on nuclear deterrence and identify novel solutions to the problems AI creates.

AI Explosion

The year is 2040 and the 2034 Kargil war between India and Pakistan proved the efficacy of AI as an instrument of war. The war was fundamentally a contest of doctrine and adaptation. The victor used AI and autonomous systems to generate mass and speed throughout the conflict, throwing wave upon wave of self-organizing robots against adversary forces. The victor’s entire defense industrial base was structured to support the strategy: machine-learning techniques optimized every aspect of the acquisition system, and AI systems helped coordinate deployments and logistics and optimize maintenance cycles. The loser also embraced autonomous weapon systems, but subordinated AI to humans and focused on human-machine teaming. The human element made the teams more flexible, and they achieved some clever tactical victories, but the robotic masses proved too much. Off the battlefield, the loser’s adoption of AI-enhanced processes was ad hoc, driven by personal initiative rather than strategy.

Advances in computing power, data collection, and machine learning suggest a new era of AI prominence. Advanced unmanned systems, enhanced by sophisticated AI, allow militaries to replace expensive, multi-mission platforms with low-cost, single-mission systems. Managing masses of autonomous systems will be a challenge, but improvements in AI will make that easier too. Some AI experts also believe hardware advances will keep AI progress steady in the near term.

If AI dominates the battlefield, a United States without robust AI capabilities would lose its conventional superiority. Although the United States would retain considerable capabilities in its existing ships, tanks, and aircraft, in a full-scale conflict adversaries could overwhelm U.S. forces with masses of drones. Conventional weakness would exacerbate threats to U.S. alliance networks, as U.S. security guarantees would carry less weight. Allied states already seek increased strategic autonomy. Nonetheless, the United States would remain largely secure from existential harm due to its nuclear deterrent and asymmetric information warfare.

Threats of nuclear annihilation could still shield the United States against existential threats. The United States would retain nuclear parity with Russia and nuclear superiority over China. If even North Korea’s limited nuclear arsenal can plausibly deter the United States, American nuclear threats could deter a new robotic superpower.

Unfortunately, reliance on nuclear deterrence in this scenario would encourage broader proliferation. The United States would likely be forced to abandon its commitment under the Nuclear Nonproliferation Treaty to work toward disarmament. Some states may develop nuclear weapons as the nonproliferation regime collapses.

Asymmetric information warfare would allow the United States to resist some adversary conventional threats. The United States could create and disseminate fake images or videos designed to manipulate adversary AI software. It could use cyber means to plant such images in the data collections used to train adversary AI algorithms, or subtly alter robotic control systems to induce mistakes or slow algorithms. Such sabotage would create weaknesses in any robotic system trained on the corrupted data. Anti-satellite weapons may also disable or damage the orbital or ad hoc satellite networks used to control adversary robots. Electronic warfare capabilities might defeat some older robotic systems or send false signals to confuse or control adversary drones.
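To make the image-manipulation idea concrete, the sketch below shows the core of an adversarial-example attack against a deliberately simplified, hypothetical classifier. Everything in it (the toy linear model, the predict function, the epsilon step size) is an illustrative assumption rather than a description of any fielded system; the gradient-sign trick it demonstrates is the same idea behind the fast gradient sign method used against deep networks in the research literature.

```python
# Minimal sketch of an adversarial example against a toy classifier.
# All numbers and names here are illustrative assumptions, not a real system.
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical "trained" linear image classifier over 64 pixel values:
# score = w . x + b, and the model outputs class True when score > 0.
n_pixels = 64
w = rng.normal(size=n_pixels)
b = 0.0

def predict(x):
    return (w @ x + b) > 0

# A clean input the classifier labels True (say, "military vehicle").
x = rng.normal(size=n_pixels)
if not predict(x):
    x = -x  # flip so the clean example starts in class True

# Gradient-sign perturbation: nudge every pixel slightly in the direction
# that most reduces the score. For a linear model, that gradient is just w.
score = w @ x + b
epsilon = (abs(score) + 0.1) / np.abs(w).sum()  # smallest step that flips the label
x_adv = x - epsilon * np.sign(w)

print("clean prediction:      ", predict(x))      # True
print("adversarial prediction:", predict(x_adv))  # False
print("max per-pixel change:  ", epsilon)         # small next to pixel scale ~1.0
```

Data poisoning, the other tactic the paragraph mentions, works at the opposite end of the pipeline: rather than perturbing inputs to a fielded model, the attacker slips corrupted examples into the training set so the resulting model inherits exploitable blind spots.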

But in the world of AI Explosion, advances in autonomy will limit the harm from information-based attacks. New autonomous systems are less dependent on external information: more autonomous platforms and weapons mean less need for commands sent via satellite or electronic signal. Nonetheless, U.S. information capabilities might inflict enough harm to prevent adversaries from achieving some objectives.

The United States would also be likely to face new homeland security risks from non-state actors. Open-source information already allows non-state actors to build crude robotic weapons, and state sponsorship could extend their reach enough to cause considerable harm. Simple drones could enable novel forms of attack against chemical facilities, airports, and stadiums, causing mass casualties.

AI Trinity

The year is 2040: AI and robotics threaten nuclear deterrence and dominate the battlefield. Swarms of drones guard national borders with a mixture of advanced air and missile defenses, while massive undersea swarms rove the sea in search of nuclear submarines. Cheap drone-mounted sensors have virtually eliminated costly advantages in stealth, made the ocean vastly more transparent, and cast doubt on submarines as reliable second-strike platforms. Other AI capabilities help manage the system, optimize processes to keep costs low, and reduce false positives and negatives. A series of short but bloody conflicts among the United States, China, and Russia in the late 2030s raised the specter of a new great power conflict unconstrained by nuclear weapons.

AI could threaten the credibility of the U.S. nuclear deterrent. Although constant, real-time tracking of all nuclear submarines is difficult to imagine given the massive size of the oceans, technology improvements and some luck could allow an adversary to know the locations of second-strike platforms long enough to eliminate them in a first strike. Swarms of undersea drones and big data analysis offer great potential for new and improved anti-submarine platforms, weapons, and sensor networks. Already, some missile defenses use simple automation that could be improved with AI. Drones can also help track missiles, serve as platforms to defeat them, or simply collide with incoming missiles and aircraft. AI improvements generally enable more advanced robotic weapons, more sophisticated swarms, and better insights into data. Of course, the long history of failed attempts and the huge costs of missile defense suggest nuclear deterrence is highly unlikely to be eliminated outright, but these developments could add up to serious risks to its reliability.
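A back-of-the-envelope sketch shows why masses of cheap sensors are so threatening to sea-based deterrence. Both numbers below (the per-sensor detection probability and the swarm sizes) are invented for illustration, and the independence assumption is generous to the attacker; the point is only that even poor sensors compound quickly at scale.

```python
# Illustrative only: assumed per-sensor detection probability for one
# submarine transit, treated (generously) as independent across sensors.
p = 0.01
for n in (10, 100, 500, 1000):
    p_swarm = 1 - (1 - p) ** n  # probability at least one sensor detects
    print(f"{n:5d} sensors -> detection probability {p_swarm:.3f}")
# Prints roughly: 0.096, 0.634, 0.993, 1.000
```

In practice, detections are correlated and false alarms impose real costs, which is why ocean transparency remains plausible rather than assured.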

In such a world, a United States without robust military AI capabilities is extremely insecure. The United States has neither conventional superiority nor a reliable nuclear deterrent, and must drastically rethink American grand strategy. U.S. extended deterrence guarantees would be far less effective, and some states under the umbrella would likely seek their own nuclear weapons instead. South Korea and Saudi Arabia would likely become nuclear weapons states due to their established civilian nuclear programs, high relative wealth, and proximity to hostile powers that possess or have recently pursued nuclear weapons. The United States could expand its nuclear arsenal to mitigate the harms of a less reliable deterrent, but that would require abandoning the New Strategic Arms Reduction Treaty and other arms control agreements. Ensuring national security would mean avoiding conflict or trading a forward defense posture, with forces stationed on the Eurasian landmass, for a homeland defense designed to raise adversary costs. Diplomacy, soft power, and international institutions would remain key to national security.

However, a soft-power strategy would be extremely challenging. The factors that could inhibit development of AI — domestic dysfunction, high debt, and international isolation — would cause considerable harm to U.S. soft power. American soft power is arguably already in decline, and funding for the State Department and U.S. Agency for International Development has been cut considerably. Likewise, any abandonment of arms control treaties to support the nuclear arsenal would cause further damage. In short, in AI Trinity, a United States without AI is no longer a serious global power.

AI Fizzle

The year is 2040 and dreams of a robotic future remain a fantasy. During the early 2020s, implementation of machine learning and data analysis techniques expanded, creating some organizational and logistical efficiencies and reducing costs. But those changes were not transformative. Some states developed AI-powered autonomous platforms, but the battlefield impact was limited. A well-placed jammer or microwave weapon could defeat even large masses of autonomous systems.

The possibility of AI Fizzle has not been given enough serious consideration. Future AI may not handle battlefield complexities well enough to prove useful. True robotic dominance may require human-level AI, which will likely take 80 years or more given how little neuroscientists know about the human brain. Even autonomously distinguishing between combatants and non-combatants is unlikely in the near term. Advances in AI may also slow. During the 1980s, AI research entered a so-called “winter” in which research funding cuts, the abandonment of older AI systems, and market shifts resulted in a lull in breakthroughs and public interest. Particular AI techniques may also go through dark periods, as during the 1970s and 1990s when funding for, and innovation in, neural networks dried up. Some already predict a coming AI winter.

In this world, the costs of limited U.S. development of AI are minimal and may be a net positive. Resources and leadership attention spent encouraging AI could be directed to other, ultimately more impactful capabilities. For example, gaps in the suppression of enemy air defenses mission could prove more consequential than AI in the short run. Challenges unrelated to technology, such as defense mobilization, may matter most.

Other emerging technologies, such as 3-D printing and nanotechnology, may also prove more transformative than AI. 3-D printing may revolutionize manufacturing, and nanotechnologies may lead to extremely low-cost sensors, self-healing structures, and ultra-light materials. In this scenario, if the United States focuses on these technologies while adversaries focus on AI, the United States would gain first-mover advantages and more robust capabilities.

Alternatively, no single emerging technology may prove transformative. Various emerging technologies may provide real, but not major, benefits. If so, the United States should find the combination of technologies that best supports its security needs and integrate them into the defense establishment. A balance ought to be struck between emerging and established technologies; sometimes tried and true is best.

Where Do We Go From Here?

The future may look nothing like these worlds, or may combine elements of each. Perhaps robotic weapons prove decisive only in limited circumstances, with occasional strategic consequence. Perhaps AI challenges the sea-based leg of the nuclear deterrent with little risk to the land or air legs. The U.S. government has influence over which world manifests, but that influence should not be overstated: the government can move funds, adopt policies to shape innovation, and encourage public interest, but the private sector drives most current AI development. Uncontrollable physical realities also shape AI’s national security impact. Is narrow AI sufficient for autonomous systems to dominate the battlefield, or is near-human AI necessary?

Despite such high uncertainty, the value of futures scenarios is to surface unspoken assumptions, identify strategies that will be useful regardless of which future manifests, and think through possible responses to each world. For example, the United States could:

Expand Foresight

The national security community needs to understand which world the future is heading towards. Congress, through the National Defense Authorization Act, should mandate regular, unclassified reports on the risks of military AI use and on relevant advances in China, Russia, and other leaders in the field. Congress should also mandate regular reports assessing the relative significance of different emerging technologies for national security, adversary pursuit of them, and how U.S. competitiveness could be improved in the most critical technology areas. The United States should also establish a national intelligence officer for AI to prioritize and lead intelligence community assessments of AI and related technologies.

Prioritize Information Warfare

Information warfare — particularly electronic, cyber, and space warfare — is a natural counter to, and enabler of, AI and autonomous systems. The United States should reinvigorate electronic warfare and cyber security capabilities that have struggled. In particular, the United States should enhance electronic attack and electronic protection, make cyber security a fourth pillar of acquisition decisions — alongside cost, schedule, and performance — and develop new concepts to integrate these capabilities for maximum effect. Studying China’s Strategic Support Force, which combines electronic, cyber, and space warfare, may also offer insights into novel doctrine and into how information warfare capabilities interact with AI; these insights might be worth incorporating into U.S. doctrine, training, exercises, and capability development. Adversaries could also threaten U.S. AI capabilities, so information warfare activities should inform U.S. acquisition, testing, deployment, and war-gaming related to AI.

Encourage Adaptability and Creativity

The huge range of potential AI futures means the U.S. government should endeavor to create more adaptable and creative defense organizations. Technology may trend strongly toward one future, but breakthroughs may cause a sudden shift to another. Even in a world in which the United States aggressively pursues AI, defense organizations require significant change to take advantage of it. Encouraging adaptability and creativity means flattening organizations and elevating leaders who encourage honest criticism, high competence, and collaboration. Diversity in all its dimensions should also be encouraged: diverse groups and intellectually diverse individuals tend to perform better in forecasting, analysis, and creativity. Doing so successfully may require fundamental structural changes to hiring practices that tend to emphasize insider knowledge and years of experience. Think tanks, consultants, and contractors should also look beyond cultural fit in hiring, which in the national security community often translates to white male veterans. Increased diversity requires new management approaches to minimize divisions, miscommunication, and compensation disparities.

Ensure Nuclear Survivability

The United States should evaluate and counter the risks AI creates for nuclear weapons survivability. AI may offer solutions to its own problems: better autonomy may allow more sophisticated decoys and improved delivery systems. More broadly, the U.S. government should increase grant funding for studies on the implications of AI for nuclear deterrence and fund nuclear-specific public writing contests akin to the War on the Rocks call for ideas that prompted this essay. U.S. war colleges and Strategic Command should create new initiatives, and expand funding for existing ones, to explore the impacts of AI and other emerging technologies on nuclear deterrence.

The future of AI is unknown. If the United States neglects AI, the implications for national security could range from devastating to net positive. The United States may need to abandon a forward defense posture in favor of homeland defense because its conventional superiority has dwindled and new technologies threaten the credibility of nuclear deterrence. Conversely, U.S. security may turn out to be better off if adversaries prioritize overhyped AI while the United States invests in more impactful capabilities. No matter how AI advances in the coming decades, research, investment, and policy decisions today will influence how secure the United States remains in any possible AI future.

Zachary Kallenborn is a freelance researcher and analyst, specializing in Chemical, Biological, Radiological, and Nuclear (CBRN) weapons, CBRN terrorism, drone swarms, and emerging technologies writ large. His research has appeared in the Nonproliferation Review, Studies in Conflict and Terrorism, Defense One, the Modern War Institute at West Point, and other outlets. His most recent study, “Swarming Destruction: Drone Swarms and CBRN Weapons,” examines the threats and opportunities of drone swarms across the full scope of CBRN weapons.

This article does not represent the views of the author’s current or former funders or employers.

Image: U.S. Army (Photo by David McNally)