Rise of the Machines or Just a Routine Test?

AI drone

On June 1, the Guardian reported that during a simulation conducted by the U.S. Air Force, an AI drone “killed” its operator. According to Col. Tucker Hamilton, chief of the Air Force’s AI Test and Operations, the drone had been ordered to eliminate enemy radar installations, and when it noticed that human operators could override its decision to fire, it decided to eliminate them so that it could continue with its mission unopposed. Even though it was only a simulation, and no one was harmed, as could be expected with a headline so good, the story was picked up by news outlets around the world. The initial story was retracted only a day later, but the damage had already been done — critics of autonomous weapons used the article to point out the many dangers of such systems. Yet, in the rapid flurry of critiques, one key point was lost: Events like those reported in the original article, that an AI-enabled system might exhibit novel and dangerous unwanted behaviors, are not science fiction horror stories. They are real and important elements of the testing and evaluation of new weapon systems in development. 

By virtually all accounts, the original story was in error — such a simulation never took place. But for the sake of argument, what if it had? What if the Air Force had performed simulations with an AI drone and found that it tried to kill its operator? Would this mean that such systems could never be trusted, or that machines would inevitably turn on their creators? Would it mean that the military should scrap the development of all autonomous or AI-enabled systems? 

Of course not. AI is critical to a number of existing combat systems and will be critical for many more in the future. If Hamilton’s story had happened, it would simply imply that increasingly autonomous systems may develop novel behaviors, some of them unwanted, and that the military and arms suppliers must be cautious as they move forward in developing these weapons. This is not to say that AI-enabled or autonomous weapons pose no ethical or legal challenges but to highlight that testing and evaluation of new weapons is carried out specifically to find such problems. And if problems are found, the response should be to address these, not to succumb to fear and abandon the project altogether. More importantly, thorough and transparent testing and evaluation are necessary for developing new weapon systems that are as safe and reliable as possible, but these practices are likely to be greatly undermined if openness about issues encountered leads to calls that all development be halted. When developing weapons there will be setbacks, but such setbacks are a critical element of moving forward in design. 

 

 

By taking seriously the realities of weapons development, along with the importance of intermediate failures and shortcomings, it becomes clear that militaries and governments need to pay close attention to when systems fail and why. However, such attention must be dedicated to removing problems. If some cannot be removed, it is then the responsibility of these agencies to determine if subsequent risks can be adequately mitigated or if the projects should be scrapped altogether. But, either way, in order to reach these conclusions, it is critical that development continues to move forward and not be derailed by every setback and every wave of critiques following a good headline. 

Killer” Robots

Autonomous weapon systems, often referred to as “killer robots” by critics, are the center of much public debate and international efforts to regulate their development and use have been ongoing for years. It is important to remember, however, that autonomous weapons have been a mainstay of modern militaries for decades. Both the U.S. Department of Defense and the International Committee of the Red Cross define autonomous weapon systems as systems that can select and engage targets without human intervention. While this includes any number of futuristic platforms that one might imagine, it also includes such things as loitering munitions, anti-radiation missiles, and anti-missile systems like the Phalanx used by many navies around the world. What critics of autonomous weapons commonly object to are not these tested platforms, but autonomous systems that might sometimes act unpredictably or would be incapable of adequately distinguishing between legitimate targets and those protected from attack. 

When Hamilton stated that an AI-powered drone “killed the operator because that person was keeping it from accomplishing its objective,” this played straight into the narrative that critics had been fostering: These weapons will be unpredictable, uncontrollable, and inherently dangerous things. 

Air Force spokesperson Ann Stefanek responded that “[t]he Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology”. Hamilton also amended his earlier words, saying that he was referring to a mere thought experiment and not an actual simulation that occurred, adding that “[w]e’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome.” Yet, even after news circulated that the original report was in error, critics remained stalwart that the case “nevertheless shows the urgent need for international law on autonomy in weapons systems to ensure meaningful human control over the use of force.” 

It is worth noting that this critique is somewhat beside the point, as Hamilton’s story focused on an AI-powered drone that was under human control, in that it could identify threats but could have its attacks prevented by a human operator. However, the terrible twist in this thought experiment was that the drone “got its points by killing that threat,” and so in order to maximize its points, it “killed” the operator who might prevent it from accomplishing its objective. When it was instructed not to kill the operator, it then “destroyed” the control tower allowing the operator to override its functions. 

Though the case at hand was hypothetical, and though the critics’ objection misses the mark (slightly), there is clearly something amiss when a drone turns on its handlers in order to “get more points.” The underlying problem is what is referred to as an alignment problem — how do you get an AI system to not just do what you explicitly say, but to do so in a way that aligns with the military’s underlying goals and values? The drone in Hamilton’s thought experiment was doing what operators asked of it — eliminating enemy air defense systems — but it was going about this in a manner totally out of sync with their overall goals and intentions, and directly acting against the operators’ immediate orders to boot. But do all of these things together imply that such a system would necessarily be a failure? More importantly, would finding such issues in a weapon platform under development really give us grounds to halt development or scratch that project? To answer that, we should ask ourselves what we want in our weapons and how engineering and design move forward in practice. 

What Do We Want in a Weapon?

Obviously, any weapon system will be expected to meet a number of requirements: It must be effective for some particular warfighting role(s), safe for soldiers to use, discriminate enough that it will not subject civilians to disproportionate risks, and ideally cost-effective. Autonomous weapons have further requirements as well, a central point in the U.S. Department of Defenses Directive 3000.09: Autonomy in Weapon Systems. Among other things, such systems must “allow commanders and operators to exercise appropriate levels of human judgment over the use of force,” be subject to temporal, geographic, and operational constraints in order to minimize chances of mistakes, be sufficiently robust, be protected against enemy tampering and interference, and be designed so as to be understandable by operators, provide feedback on operations, and delineate clear procedures for activation and deactivation. Altogether, the Department of Defense sums up the requirements for autonomous weapons under five broad principles: responsibility, equitability, traceability, reliability, and governability. 

In addition to these concrete and general requirements for autonomous weapons, Dr. Andrew Hill and Col. Gregg Thompson of the U.S. Army War College explore “five giant leaps” they believe serve as benchmarks in the development of autonomous weapons. Two of these leaps, the spontaneous doctrine test and the disciplined initiative test, hold relevance for the hypothetical case Hamilton described. 

The spontaneous doctrine test involves deliberately placing a robotic system in a situation for which it is suboptimally organized or equipped for an objective, and then allowing it to explore different ways of fighting.

Now, while having an operator who can override targeting decisions should not necessarily be seen as a “suboptimal” organizational structure, there is a sense in which, at least from the autonomous system’s perspective, it is — the AI drone is instructed to destroy enemy air defenses, it identifies such defenses, but for some reason, it can then be prevented from doing what it was instructed to do. If the drone were to then find creative ways to get around this final hurdle, that would represent a possible leap forward, a formation of spontaneous doctrine allowing it to more completely fulfill its mission goals. When these “creative ways” involve the targeting of friendly personnel, designers obviously need to address this, but the fact remains that the drone still found a way to get around limitations on its ability to achieve mission success. 

The disciplined initiative test relates to justified acts of disobedience, when a soldier (or autonomous system) alters its orders or objectives in order to achieve even greater aims. Again, as above, the drone that targets its own operator to achieve these aims is clearly a failure, but it is also reaching toward this “leap” demanded of autonomous weapons — it is given a mission, and it does what is needed to complete that mission. Finding novel methods for completing tasks (or going beyond them) is a good thing, and it is the job of designers, engineers, and testers to ensure that none of these novel methods are in breach of legal or ethical principles governing warfare and that the system is otherwise functioning as intended. 

The Importance of Testing

This brings us to a critical aspect of the development of new weapons in war: testing and evaluation. In the current version of Directive 3000.09, one of the areas that saw significant additions from the last version was the required review process for autonomous weapons and the methods for “testing and evaluation” and “verification and validation.” Specifically, the directive stipulates that there must be an “establishment of minimum thresholds of risk and reliability for the performance of autonomy in weapon systems,” as well as “concrete, testable requirements for implementing the DoD AI Ethical Principles.” To achieve this, the directive places heavy emphasis on the development of administrative, human, and technical systems for supporting the testing and evaluation and verification and validation of (semi-) autonomous weapon systems. 

If a drone were to fire on its operator to “gain more points” in testing, it would certainly not pass the review process laid out in the directive. Moreover, one of the core reasons for doing testing in the first place is to see if such things are possible. If they are, then the system is not ready for deployment and requires more work. To demand that a system or platform behave and perform well in every single test is to expect that there will be no problems or unforeseen wrinkles in development, and this, perversely, is apt to make problems more likely. 

Transparency, Survivor Bias, and Responsible Innovation

In February 2007, a flight of six F-22 Raptors were flying eastward across the Pacific Ocean, when, suddenly, their computer systems crashed, taking down nearly everything else with them. In an interview with CNN, Maj. Gen. Don Sheppard (ret.) described the situation as follows: 

At the international date line, whoops, all systems dumped and when I say all systems, I mean all systems, their navigation, part of their communications, their fuel systems.

It seems that some part of the computer code in the aircraft could not handle the sudden jump backwards in time that occurred when the aircraft hit the dateline, and this caused a rapid shutdown across the whole system. Luckily, the aircraft were able to limp home and were fixed in short order, but things could have been much worse. 

But what if every time there was a setback or error found there came public pressure to abandon the project? Worse still, what if there was such pressure and Lockheed Martin knew there was this potential error hiding in the wings? (Pun unintended, but welcome.) Together, these would create significant pressure to not make mistakes, or at the very least, not show anyone that you have made them. In this case, Lockheed Martin could simply try to keep their aircraft from crossing the international dateline. Problem solved. Until, that is, such craft might be deployed to a combat environment in that area. 

By penalizing failure and putting pressure on those with setbacks, we are not likely to suddenly have better engineers, smarter scientists, or more diligent operators. We will, however, have teams that show fewer errors. And this is a problem. As computer scientist Harold Thimbleby aptly notes, “[p]eople, however well trained, will always eventually make a slip,” and sometimes these slips might have disastrous consequences. But by being transparent about mistakes, we can develop systems that make mistakes less likely or have enough safeguards in place to ensure that some mistake here or there does not result in catastrophic failure. 

With regards to the development of autonomous weapons, the story Hamilton gave (without any evidence) alone created a wave of hysteria concerning such systems and, even after the initial report was amended, critics remained staunch in their objections. But autonomous weapons are a part of modern militaries already, have been for decades, and will continue to be for the foreseeable future. Public outcry can still affect development efforts, though, putting pressure on developers to “not make mistakes” or, simply, not show anyone that they’re making them. This pressure could be harmful — making mistakes and learning from them is, as noted by mathematician Matt Parker, “how engineering progresses.” Developers build something, try it, look for faults, and, when they find them, learn what went wrong so that they can do better. By demanding that no mistakes occur in the first place, it becomes more likely that the mistakes that inevitably do occur will be swept under the rug rather than treated as the important lessons they are. 

Conclusion

The case recounted by Hamilton never happened. There was no AI drone either in simulation or real life trying to kill its operator or destroy its communications tower. But even if there had been, the response to such an event should not be immediate, fearful demands to halt the development of autonomous systems in the military. Engineering failures are opportunities for learning what went wrong and why, and indeed are key to improving not only design for specific systems but also understanding larger scientific realities. One must also remember that testing is never really complete, especially for military systems, as adversaries will continually search for ways to undermine them, sometimes in ways that may make weapons unpredictable. This is an unfortunate reality, and one must bear in mind that for any engineering enterprise, mistakes and accidents will happen. Testing and evaluation help to keep such failures to an acceptable minimum, but these will never go away entirely. 

Accepting that there will be problems, and indeed embracing these as learning opportunities, is central to progressing with design. A culture of fearmongering in the face of setbacks is not going to make setbacks less likely or ensure that design teams get it right on the first try. Instead, it will only ensure that companies are disincentivized from doing responsible and thorough testing and evaluation, and it will result in mistakes being hidden rather than explored. Such a situation is certain to create dangers of its own and will undermine much of the trust necessary for soldiers to be willing to use the autonomous systems designed to help them. This is not to say that criticism should be quieted, or that arms developers should be spared harsh critiques when merited. Some failures cannot and should not be tolerated, especially when these are allowed to persist in systems being deployed. However, we should ensure that our critiques are aimed at genuine problems and not at necessary steps in the design process (or indeed, at mere hypotheticals). Developers have a responsibility to both combatants and civilians around the world to ensure that the systems they design are reliable, discriminate, and subject to meaningful human control. All of this demands that weapon developers look for problems as hard as they can and then find solutions. Anything less is likely to produce weapons that are not trustworthy and that should not be deployed. 

 

 

Dr. Nathan G. Wood is a postdoctoral fellow at Ghent University and an external fellow of the Ethics and Emerging Sciences Group at California Polytechnic State University, San Luis Obispo. His research focuses on the ethics and laws of war, especially as they relate to emerging technologies, autonomous weapon systems, and aspects of future conflict.