How the Endless War Came Home


Editor’s Note: This article was written for a conference on “The Presidency and Endless War,” organized and hosted by the University of Virginia’s Miller Center of Public Affairs. The conference has been rescheduled for later this year due to the global pandemic.


Companies working for the U.S. military invented duct tape. And Silly Putty. And undershirts. The military needed waterproof tape to seal ammunition boxes, a synthetic rubber substitute to compensate for rubber shortages, and something to protect uniforms from wear and tear. Military necessity drove these and other technological advances, which were tailored to the enemies the military was fighting and the nature of conflict at the time. The endless war against al-Qaeda and its associates has similarly fostered a range of technological inventions or advancements, driven by the nature of the foes America has been fighting for two decades.


Duct tape is now used in half of all American households for various tasks, and Silly Putty is a ubiquitous childhood toy. But today, technologies developed for the endless war, particularly those that help the military and intelligence agencies locate their enemies, are coming home in less benign ways, posing significant challenges to our assumptions about personal privacy. The executive branch has sorted through many of the legal complexities attending the use of drones abroad, and the overseas use by the United States of facial recognition software and other tools driven by artificial intelligence has not yet encountered legal challenges. The executive and Congress, however, are only beginning to address the legal questions raised by the use of these tools in a domestic peacetime environment.

The Enemy in the Endless War

Although the endless war means different things to different people, I use this term here to refer to the U.S. conflict against al-Qaeda that was launched after Sept. 11, 2001, as well as America’s use of force against the different branches of al-Qaeda, the self-proclaimed Islamic State, and al-Shabaab. Each of these organized, armed non-state actors seeks to blend in with local populations and commit terrorist acts while remaining undetected. To combat organizations that use tactics and techniques radically different from those of state militaries, the United States had to develop new tools to allow it to identify and locate specific leaders and members of these groups from among tens of thousands of people — often civilians. In other words, identifying, locating, and targeting members of al-Qaeda required the United States to find needles within haystacks. These needles were in places like Afghanistan, Pakistan, Yemen, and Somalia. And as the endless war dragged on, actors inside the United States who were inspired by these groups took up the cause as well.

Using Tech to Find the Enemy

The United States has a long history of developing or fine-tuning a range of technologies to help it find its enemies. Consider its use of armed drones. During World War I, the United States and the United Kingdom first built rudimentary drones that they launched by catapult and flew by radio control. The U.S. military deployed unmanned aerial vehicles more widely during the Vietnam War to launch missiles, collect signals and imagery intelligence, and drop leaflets. The United States and the Soviet Union used drones during the Cold War to penetrate opponents’ airspace and take reconnaissance photos. In the late 1990s, the U.S. Air Force began working seriously on how to arm drones with missiles. But it was only in the aftermath of Sept. 11 that the United States began to use drones to conduct targeted strikes. The U.S. military and intelligence agencies sought to target individuals and small groups. Washington recognized that armed drones might prove a very useful tool to help identify and ultimately kill the needles it found in those haystacks. Although the drones were slow, they were largely safe from enemy fire and provided a light military footprint inside foreign states in which the non-state groups were operating. The first several drone strikes on al-Qaeda and other non-state actors took place in 2001, when the United States targeted (and missed) Mullah Omar in Afghanistan, and in 2002 in Yemen against Qaed Salim Sinan al-Harethi. The strikes continue to this day.

Since 2006, unarmed drones have become common in the skies above America. They have also proliferated around the world, raising a range of privacy and safety concerns and fears about their use by terrorist groups. Some observers are concerned about the use of drones by domestic law enforcement officials, while others worry about abuses by private operators. Amazon anticipates the use of drones to deliver packages, leading some to envision future skies full of buzzing quadrotors. In the United States, the federal government has been slow to enact basic drone regulations, and some of the state and local regulations are not well tailored to address the risks. Armed drone use has also spread internationally, with states such as China and Israel, as well as the United States, selling these armed platforms to a range of other countries. The Defense Department’s and the CIA’s extensive use of armed drones in the endless war seems to have normalized their deployment; other states seem certain to expand their use in future conflicts.

Drones are just one of the needle-locating technologies that have grown during the endless war. Facial recognition software — a tool that can help militaries identify suspected terrorists in crowds, on video footage, or during operations — has also advanced quickly in the past decade. The earliest effort to develop facial recognition capabilities appears to have been undertaken by scientists associated with the Central Intelligence Agency back in 1965. From 1993 to 1997, the Defense Advanced Research Projects Agency ran a program that assessed prototypes of facial recognition software to encourage their further development. A range of computer scientists — many of them unconnected to the U.S. government — continued to improve the technology in the next decade, but only in the past ten years has facial recognition software taken off. According to the New York Times, the National Security Agency ramped up its use of facial recognition technology “to exploit the flood of images included in emails, text messages, social media, videoconferences and other communications,” believing that “technological advances could revolutionize the way that the N.S.A. finds intelligence targets around the world.” U.S. military and intelligence agencies, as well as the Department of Homeland Security, local law enforcement, and private companies, now use facial recognition software regularly: to identify criminal suspects who lack identification, to process travelers in airports, and to look for suspicious actors in security camera feeds. Surely the U.S. government’s keen interest in improving facial recognition software for its military and intelligence operations has facilitated its development for domestic use. Yet the regulatory regimes for its domestic deployment are very thin: there are no federal rules and only a few state statutes regulating the use of facial recognition software. Once again, the technology has developed faster than the law.
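
Underneath these tools, the needle-in-haystack problem reduces to a matching step. As a rough illustration only, here is a minimal sketch of how most modern facial recognition systems compare faces: each image is first reduced to a numeric “embedding” vector by a trained model, and recognition becomes a nearest-neighbor search against a watchlist. The names, vectors, and threshold below are invented for the example; real systems derive embeddings from deep neural networks and search far larger databases.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(probe, watchlist, threshold=0.8):
    """Return the watchlist identity whose embedding best matches the probe,
    or None if no candidate clears the similarity threshold."""
    best_name, best_score = None, threshold
    for name, embedding in watchlist.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy usage: random vectors stand in for real face embeddings.
rng = np.random.default_rng(0)
watchlist = {f"subject_{i}": rng.normal(size=128) for i in range(1000)}
probe = watchlist["subject_42"] + rng.normal(scale=0.05, size=128)  # a noisy re-capture
print(match_against_watchlist(probe, watchlist))  # -> subject_42
```

The haystack here is the watchlist; the threshold decides how confident the system must be before declaring a match, and where it is set determines how often innocent faces are misidentified.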

Other technology fits this pattern as well. The need to find needles in haystacks has prompted the U.S. government to invest heavily in artificial intelligence, which officials have used, for example, to identify people or moments of interest in drone footage. Because artificial intelligence is particularly good at spotting anomalies and patterns in vast quantities of data, federal and private entities are collaborating to develop more advanced capabilities to improve the government’s counter-terrorism operations. In a paradigmatic use of this technology, the government collected the metadata of all phone calls with at least one end located inside the United States, pursuant to section 215 of the USA PATRIOT Act. For nine years, federal agencies persisted in efforts to identify phone calls by, and connections between, terrorists, even though the program produced few leads.
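
To make that needle-finding logic concrete, here is a simplified sketch of the kind of “contact chaining” analysis the section 215 metadata program was designed to support: starting from a suspect seed number, it finds every number within a set number of hops in the call graph. The phone numbers and records below are invented for illustration; the actual program operated over billions of records.

```python
from collections import defaultdict

# Invented call-detail records: (caller, callee) pairs only, no call content.
call_records = [
    ("555-0001", "555-0002"),
    ("555-0002", "555-0003"),
    ("555-0003", "555-0004"),
    ("555-0005", "555-0001"),
]

# Build an undirected contact graph from the metadata.
contacts = defaultdict(set)
for caller, callee in call_records:
    contacts[caller].add(callee)
    contacts[callee].add(caller)

def contact_chain(seed, hops):
    """Return every number reachable from the seed within `hops` calls."""
    frontier, seen = {seed}, {seed}
    for _ in range(hops):
        frontier = {n for number in frontier for n in contacts[number]} - seen
        seen |= frontier
    return seen - {seed}

# A two-hop query from a hypothetical seed number.
print(sorted(contact_chain("555-0001", hops=2)))
# -> ['555-0002', '555-0003', '555-0005']
```

The analytic appeal, and the privacy concern, are the same: even two hops from a single seed can sweep in hundreds of people who have no connection to terrorism beyond having called someone who called someone.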

The Two-Way Street of Technological Innovation

Not all of our recent technological developments have sprung from the necessities of fighting the endless war. The private sector has produced major technological advances apart from the government and its perceived military needs. But the nation’s military requirements — to fight non-state actors who deliberately try to conceal themselves within civilian populations — have spurred both the government and technology companies (and sometimes both working together) to develop tools whose purpose is to identify needles in haystacks. And it is common for technologies developed for military purposes (Silly Putty!) to find their way into everyday use. The endless war has done that for us, too. Drones, artificial intelligence, and facial recognition software are now firmly embedded in U.S. society, and have been normalized, in part, because the military has used them far and wide for 20 years. In short, the endless war has helped create tools that allow a host of actors, including the federal government, local governments, and private companies, to treat us in our domestic lives as groups of haystacks — or, if we are unlucky, individual needles.

Technology Transplants and U.S. Law

For the past 20 years, the U.S. government has had to work through hard questions about how to deploy these needle/haystack systems lawfully during armed conflict. It has faced litigation domestically over its use of section 215 and other electronic surveillance tools, as well as its use of drone strikes against U.S. citizens abroad. It has faced dissent from contractors working on AI-related military programs (such as Google on Project Maven). And it has faced legal and policy critiques by foreign states and a range of commentators about its surveillance and targeting decisions, including the U.S. use of force against non-state armed groups inside other countries with which it is not at war. Notwithstanding these challenges, it has reached a basic modus vivendi about how to use these tools in the endless war.

The same is not true for the domestic application of these technologies. Congress, local governments, scholars, and companies are all debating whether and how to regulate the use of facial recognition software by the government and private actors. We are nowhere near agreement on whether there should be basic requirements about the use of artificial intelligence and machine learning, either in the public or private sector. These technologies are complicated enough to think through when we are using them against enemy forces in the endless war. But the endless war — and the transplantation of these erstwhile military technologies into our everyday lives — is forcing us to think through the law, ethics, and morality of using these technologies inside our own society, against each other.


Ashley Deeks is the E. James Kelly, Jr.-Class of 1965 Research Professor at the University of Virginia Law School, where she teaches international law and national security law. She serves on the State Department’s Advisory Committee on International Law and the Board of Editors for the American Journal of International Law. She is the supervising editor of AJIL Unbound, a senior fellow at the Miller Center, and a senior contributor to Lawfare.

Image: U.S. Army (Photo by Sgt. Jaerett Engeseth)