The Hypocrisy of the Techno-Moralists in the Coming Age of Autonomy

Frank G. Hoffman

Societies make war the way that they make wealth, as the Tofflers noted years ago in their path-breaking book, War and Anti-War. For this reason, national security analysts have been exploring the potential implications of the convergence of unmanned systems and artificial intelligence (AI). Our social and economic lives will be increasingly driven by robotics and the algorithms at the core of AI. The technology is already with us in our homes and cars and will eventually generate profound changes in the scale and speed of warfare. This new era in the conduct of warfare could be called the “7th Military Revolution,” building upon the previous information technology-driven revolution.

The utility of autonomous systems in the military covers a wide range of missions and tasks. AI “will have a transformative effect on the strategy of those states employing them.” This is because, as noted by Kenneth Payne of King’s College London, “militaries that can successfully develop and utilize them will experience a dramatic increase in fighting power relative to those that cannot.” AI and autonomy can improve decision-making at the operational level of war, speed up the fusion of intelligence from mountains of data, and vastly accelerate the defense of our cyber and space networks. As noted in recent scholarship, the security implications could alter the balance of power. The technology will help plan logistics and transportation efficiently and perform numerous medical analysis tasks. Some scholars think that AI and machine learning will even revolutionize the role of human input into strategy.

The Dissent of the Minority

However, there is a movement within the commercial AI world to block the U.S. Defense Department from preparing for this coming era. In these virtual pages, Rachel Olney captured the economic difficulties faced by high-tech startups, but a morality play is also unfolding among the dominant “Big Tech” players who control much of the market and the human talent. Thus, efforts to build a bridge between the Pentagon and the tech community need to focus on the major developers and not just nurture the small firms where tomorrow’s breakthroughs may occur. The Big Tech community includes a minority of “techno-moralists” who could appreciably harm U.S. security interests.

In 2018, a group of Google employees, about five percent of the company’s workforce, signed a letter stating that “Google should not be in the business of war” and sought a promise from the company’s leadership and stockholders that they would never develop “warfare technology.” They urged the company to withdraw from its Pentagon contract for Project Maven, an intelligence support effort to train algorithms to absorb data from surveillance videos. Arguing that they did not agree to work in the “business of war,” the employees prevailed, and the company withdrew from the contract. Last year, Google’s leadership published its code of ethics to frame how it would apply AI. The code states that Google will not develop AI or other technology for use in weapons; however, it did not rule out work with national governments and militaries in other applications, including for China.

This movement has grown beyond one company and now infects leading-edge companies like Microsoft. Its employees recently posted a letter that exclaims: “We are alarmed that Microsoft is working to provide weapons technology to the U.S. Military, helping one country’s government ‘increase lethality’ using tools we built.” Notice the reference to “one country’s government.” That would be the United States, where Microsoft was founded and is presently headquartered. Addressed to the company’s CEO Satya Nadella and President Brad Smith, the letter goes on to add: “We did not sign up to develop weapons, and we demand a say in how our work is used.” High-tech employees have real clout, given their scarce skills and the degree to which their intellectual capital drives major innovation.

Smith, speaking at the Reagan National Defense Forum last December, averred in clear terms that Microsoft would “make available all its technology to the military, full stop.” Jeff Bezos, CEO of Amazon, echoed Smith at an industry summit: “If big tech companies are going to turn their back on the U.S. Department of Defense, this country is going to be in trouble.” Nadella quickly responded to the moral hazard evident in the “Workers 4 Good” letter, stating, “We made a principled decision that we’re not going to withhold technology from institutions that we have elected in democracies to protect the freedoms we enjoy.”

These leaders seem to better appreciate the larger context of the debate. The techno-moralist minority comes off as either naïve or hypocritical. Somehow this community feels it occupies the moral high ground even as it puts more Americans in danger and restricts the Defense Department’s ability to develop capabilities that could make U.S. weapons systems more accurate and better at defending the country and its allies.

Ironically, the civilian population actually puts more trust in the military than in other public and private institutions. Pew Center polls have found that the public trusts the U.S. military to do the right thing with AI far more than it trusts business leaders, by a margin of 80 to 45 percent.

Competitors

As noted by scholars like Elsa Kania, the Chinese and Russian governments have made clear that they place a high priority on the development of AI. Thus, it behooves the U.S. security community to examine the potential benefits and applications of AI and autonomous systems. Already, the Russians have operated robotic ground systems in combat. Readers should not take much solace in reports that the Russian Uran robotic tank underperformed in its real-world tests. Instead, the fact that Moscow is gaining valuable combat experience and clear lessons should be of concern. The Chinese are pursuing autonomous, lethal surface boats, and China is recruiting some of its best young students to this field, with more aggressive lethal applications in mind than the United States is working on.

AI plays a significant role in Beijing’s conception of future warfare. The goal of the People’s Liberation Army (PLA) is the development of advanced high-tech weapons that can help China leapfrog the advantages that the United States presently holds. That edge is eroding quickly, according to the U.S. intelligence community’s most recent worldwide threat assessment, an operational study by RAND analysts, and the Pentagon’s latest defense strategy, which I played a role in writing. AI will enable the Chinese military to increase its warning times and its ability to project power across domains, including from space and cyberspace, as well as via advanced autonomous systems like unmanned aerial vehicles or unmanned undersea vehicles. Importantly, the PLA aims to use AI to support “system-of-systems warfare.” Rather than seek physical destruction in the air, sea, and ground domains, Chinese theorists have focused on gaining cognitive dominance, what one pair of scholars termed “brain supremacy.” The PLA assumes that once it obtains this competitive edge in the cognitive space, superiority in the other domains will naturally follow.

Some believe America is already losing the AI race and faces a Sputnik-like moment. Kai-Fu Lee, former head of Google China, thinks so. His book, AI Superpowers: China, Silicon Valley, and the New World Order, is almost persuasive on this point. But I do not believe that China has the capacity to take the lead in this field over the long term, even in commercial applications. Its centralized, top-down approach, state-owned enterprises, and rampant absorption of Western intellectual property are unlikely to outperform the risk/reward incentives of the West. But China will make measurable progress in this field, in both business and military applications. It will be a superpower and innovator, not merely an imitator.

The PLA seems bent on exploiting these technologies and may also benefit from cooperation with Russia. Thus, there is another side of the moral argument that escapes the techno-moralists in Silicon Valley. If the Defense Department were denied access to AI-enabled defense systems, the likely result would be protracted conflicts, in Syria or elsewhere in the Middle East for example, with more human suffering, certainly among combatants and possibly among non-combatants as well. Another possible result would be longer wars in which the U.S. military operates at a disadvantage against less moralistic foes. The United States might have to offset its technological deficiency with a costlier conventional defense establishment (more active troops and manned hardware), representing a larger drain on the economy. The techno-moralists need to factor this into their understanding of what the costs to society may be.

Defense Policy and Response

The Pentagon thinks it has been sensitive to the ethical dimensions of artificial intelligence and autonomous weapons. In 2012, then-Deputy Secretary of Defense Ash Carter issued clear guidance in a directive titled “Autonomy in Weapon Systems.” This directive frames U.S. military developments and requires a human to decide when to use lethal force. The official policy requires that any new system “allow commanders and operators to exercise appropriate levels of human judgment over the use of force,” and that such judgment be exercised in accordance with the laws of war. The two exceptions to this directive are autonomous cyber weapons and autonomous systems that target other autonomous systems. Former Deputy Secretary of Defense Robert O. Work and current Vice Chairman of the Joint Chiefs of Staff Gen. Paul Selva have worked hard over the last few years to ensure that defense investments did not cross over into lethal autonomous warfare. Selva explicitly worked inside the Pentagon to avoid a rush to lethal robotic systems, which he called the “Terminator Conundrum.”

Consistent with that concern, the Defense Department has issued its strategy for developing AI capabilities, which states:

The Department will articulate its vision and guiding principles for using AI in a lawful and ethical manner to promote our values. We will consult with leaders from across academia, private industry, and the international community to advance AI ethics and safety in the military context.

The Pentagon’s policy appears both sound and quite limiting. It should preclude the production of the lethal robots presented in the short but dystopic Slaughterbots advocacy video, released in late 2017, which depicts a drone assassination swarm. Some in the AI community may have already decided that the national security community cannot be trusted with their products and that they do not want to be in the “business of war.” They are not satisfied with the Defense Department’s policy of preserving “appropriate” human control over lethal force. But they should be made aware that their hypocrisy forces U.S. military members to take on more risk in dark, dangerous, and dirty environments.

The Pentagon’s clear policy should reduce the concerns of AI engineers. Both sides should remain alert to the promise and the perils of the technology. Unless we can test and develop AI, the major driver of the emerging 7th Military Revolution, we will be ceding distinctive advantages to less constrained competitors. This may end up making war far less humane and far more costly, for both the military and civilian populations. Were the techno-moralists to win their campaign, they would bear responsibility for placing U.S. national security interests at greater risk, as critical vulnerabilities were attacked by less morally burdened competitors. This is about more than “killer robots” or military-grade lethal autonomous weapons. One can readily see it affecting our homeland security through reduced protection of space and cyber systems, personal data, missile defense, and critical domestic infrastructure. The AI/machine learning community may want to opt out of lethal capability development for the Pentagon, but if it consciously aids the Chinese military while refusing the Defense Department, for a profit, it can expect a backlash.

The larger AI community wants to take part in the U.S. government market, including defense. It does not want to stand accused of being an elitist “fiscal one percent” that denies the “one percent” of Americans who serve the United States in uniform the advanced tools that will prove critical to future warfare. That said, a headlong rush that outsources decision-making to autonomous systems without consideration of legal and ethical ramifications is not in the U.S. national interest either. U.S. defense policy is properly oriented toward precluding that rush. Microsoft’s employees should consider that and get on board, as suggested by Gen. Bob Scales.

The U.S. military should continue to pursue the accountable employment of these technologies, as they are likely to be critical to future security needs. As Lawrence Freedman notes in The Future of War: A History, we must explore claims about disruptive changes in warfare if only to better understand the choices available to competitors. These choices need to be examined seriously but with healthy skepticism to assess their military value. The Pentagon should remain open to debate on the complex moral problems raised by robotic warfare and AI. Less hype from techno-enthusiasts and less hypocrisy from techno-moralists are needed for a start. Next, more rigorous hypothesis testing about the impact of disruptive technologies is needed to determine how to best manage our economy and improve how we make war.

CORRECTION: A previous version of this article erroneously stated that Ash Carter was secretary of defense in 2012. He was deputy secretary of defense.


Dr. Frank G. Hoffman is a retired Marine infantry officer and former Pentagon official who now studies military strategy and the future of conflict at the National Defense University. These are his own comments, and they do not reflect the position or policies of the Department of Defense.

Image: Thomas Ralph Spence