The Devil You Know: Trust in Military Applications of Artificial Intelligence

Connor McLemore and Charles Clark

Editor’s Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It is based on a chapter by the authors in the forthcoming book ‘AI at War’ and addresses the fifth question (part d.) which asks what measures the government should take to ensure AI systems for national security are trusted — by the public, end users, strategic decision-makers, and/or allies.


On March 23, 2003, at the start of the Iraq war, a U.S. Army Patriot air defense battery shot down a British Royal Air Force Tornado GR4 over northern Kuwait. The Tornado’s pilot and navigator were both killed. The next day, a Patriot battery targeted a U.S. Air Force F-16 Fighting Falcon, which disabled the Patriot’s radar with a missile. A week and a half later, a Patriot battery shot down an F/A-18C Hornet over central Iraq, killing the pilot, U.S. Navy Lt. Nathan D. White. The U.S. Army was operating its Patriot batteries in “automated mode,” which allows missiles to be fired without human intervention. After Lt. White’s death, the Patriot batteries were directed to switch to manual engagement modes requiring a human operator to give the order to fire. This change ended fratricide incidents by Patriot batteries. However, such a policy change may not have been possible had the Iraqi military posed a greater threat.

The Patriot was not the first automated weapon to cause problems — land and naval mines, for example, long predate the Patriot — and it won’t be the last. The key then and now is to hold individuals accountable for the actions of the military systems they oversee. This will be especially true for those that are AI-enabled, particularly when they are engaged in combat.

A New Vocabulary for AI  

Operators need a new vocabulary to assign responsibility for AI-enabled systems. The military should use the terms “restrained AI” and “unrestrained AI” to indicate the transfer of tasks between humans and systems. Systems operating with restrained AI are intentionally slowed so their actions are understandable, while those with unrestrained AI may take actions that are too fast for humans to understand. In combat, operating with restrained AI may be dangerous when facing opponents using unrestrained AI. At the same time, the belligerent best able to leverage unrestrained AI should trust it only to do the things it is trained to do.


Commanders would have to accept a conscious tradeoff between the speed of unrestrained AI and the risk of mistakes. For example, a commander in charge of a carrier group could quickly communicate orders to unrestrain point defenses in a crisis. Transferring authority to AI-controlled defensive systems, however, could increase the risk of friendly fire incidents. Were tensions to de-escalate, automated defenses could once again be restrained. No such shared vocabulary currently exists.
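Even without that shared vocabulary, the underlying mechanics are straightforward to sketch. The Python snippet below is a minimal illustration, not a description of any fielded system: the controller class, mode names, and approval fields are hypothetical. It shows one way a point-defense system could record shifts between restrained and unrestrained operation and tie each shift to a named, accountable human.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import List, Optional


class EngagementMode(Enum):
    RESTRAINED = "restrained"      # a human operator must approve each engagement
    UNRESTRAINED = "unrestrained"  # the system may engage automatically within certified rules


@dataclass
class ModeChange:
    """Audit record tying every shift in engagement authority to a person."""
    new_mode: EngagementMode
    authorized_by: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class PointDefenseController:
    """Hypothetical controller used only to illustrate the restrained/unrestrained handoff."""

    def __init__(self) -> None:
        self.mode = EngagementMode.RESTRAINED
        self.audit_log: List[ModeChange] = []

    def set_mode(self, new_mode: EngagementMode, authorized_by: str) -> None:
        # Record who accepted the speed-versus-risk tradeoff before the mode changes.
        self.audit_log.append(ModeChange(new_mode, authorized_by))
        self.mode = new_mode

    def request_engagement(self, track_id: str, human_approval: Optional[str] = None) -> bool:
        """Return True if the system may engage the track under the current mode."""
        if self.mode is EngagementMode.RESTRAINED and human_approval is None:
            return False  # intentionally slowed: wait for a named operator to approve
        return True


# Example: a commander unrestrains point defenses during a crisis, then restrains them again.
controller = PointDefenseController()
controller.set_mode(EngagementMode.UNRESTRAINED, authorized_by="strike group commander")
assert controller.request_engagement("track-042")
controller.set_mode(EngagementMode.RESTRAINED, authorized_by="strike group commander")
assert not controller.request_engagement("track-042")
```

The audit log is the point of the sketch: whichever human authorized the shift to unrestrained operation owns the tradeoff described above, and that record survives the engagement.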

Militaries should implement strong measures to ensure unambiguous human accountability over each of their automated systems, particularly weapons systems. Such measures are already in place for landmines. In South Korea, for example, the U.S. military places mines in clearly marked and tightly monitored minefields, reducing the risk of indiscriminate deaths. Even so, if those mines were to harm the wrong people, those in charge could be identified and held responsible.

Accountability should be established for AI-enabled combat systems the same way it is for any complex and dangerous military system. Rigorous training should be mandatory, and policies should delineate the authority and accountability of commanders. Additionally, a transparent and thorough administrative structure will need to be in place to respond when things go wrong, implement corrective actions, and hold the right people accountable in order to maintain public trust in AI.

A comprehensively designed, well-trained, and exhaustively tested AI-enabled weapon system should be trustworthy enough to reliably complete some narrow, specific, anticipated tasks governed by describable and predictable rules. This includes tasks that are complex and once required highly trained human specialists (e.g., unmanned U.S. Navy aircraft can now land on aircraft carriers). Determining the tested level of AI and certifying the mission parameters within which AI systems may operate should be the responsibility of trained military operators, not the vendors who build AI-enabled systems. Operators need to be responsible for properly scoping AI and informing safeguards that prevent AI from taking undesirable actions. Militaries will need to ensure they have expert operators trained in the fundamentals of AI so that the development and deployment of AI are thoughtful, flexible, and appropriate with regard to policy and performance expectations.
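As a notional illustration of operator-owned scoping, the sketch below encodes a certified operating envelope that a trained operator, rather than a vendor, signs off on, along with a check that refuses autonomous tasking outside of it. The parameters, names, and thresholds are invented for the example and merely stand in for whatever mission parameters a real certification process would cover.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CertifiedEnvelope:
    """Mission parameters an operator has certified the autonomous system to handle."""
    certified_by: str          # the accountable operator, not the vendor
    max_altitude_ft: float
    max_airspeed_kts: float
    congested_airspace_ok: bool


@dataclass(frozen=True)
class MissionRequest:
    """A proposed autonomous task, described in the same terms as the envelope."""
    altitude_ft: float
    airspeed_kts: float
    congested_airspace: bool


def within_envelope(request: MissionRequest, envelope: CertifiedEnvelope) -> bool:
    """Refuse autonomous tasking that falls outside what the operator certified."""
    return (
        request.altitude_ft <= envelope.max_altitude_ft
        and request.airspeed_kts <= envelope.max_airspeed_kts
        and (envelope.congested_airspace_ok or not request.congested_airspace)
    )


# Example: a task inside the envelope is accepted; a task in congested airspace
# the system was never certified for is rejected.
envelope = CertifiedEnvelope("LCDR J. Operator", max_altitude_ft=25_000,
                             max_airspeed_kts=250, congested_airspace_ok=False)
print(within_envelope(MissionRequest(1_200, 140, congested_airspace=False), envelope))  # True
print(within_envelope(MissionRequest(1_200, 140, congested_airspace=True), envelope))   # False
```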

Dealing with Accidents

Inadequately trained and tested AI-enabled systems will perform based solely on the preset decisions of their designers and operators, with potentially catastrophic results. Operators should learn the limits of their systems. For example, a robotic aircraft trained to transit open sky in autonomous mode may put itself in extremis when flying in congested airspace. Therefore, as mishaps involving AI are investigated, it will be difficult to determine what went wrong independent of an understanding of the operational environment in which the mishap occurred.

After a mishap involving AI, the following questions will need to be asked: “Was the appropriate AI used?”, “Was the AI employed correctly under appropriate circumstances?”, and “Did the AI take unexpected actions under the circumstances?” An operator may have employed an AI capability outside of the validated tactics, techniques, and procedures; a commander may have used AI that was not appropriate for the circumstances due to a misunderstanding; or a design flaw may have gone undiscovered during testing. In any case, the factors that caused the mishap can be determined, appropriate human accountability established, and the public’s trust reinforced.
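One way to operationalize those three questions, sketched below purely as an assumption rather than an actual investigative standard, is to capture the answers as structured facts at the time of each engagement so that an investigation can map them onto a line of accountability. The record fields and finding categories are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto


class MishapFinding(Enum):
    OPERATOR_OUTSIDE_VALIDATED_TTP = auto()   # employed outside validated tactics, techniques, and procedures
    COMMANDER_MISAPPLIED_CAPABILITY = auto()  # the wrong AI for the circumstances
    LATENT_DESIGN_FLAW = auto()               # unexpected behavior despite correct employment
    NO_FAULT_FOUND = auto()


@dataclass
class EngagementRecord:
    """Facts an investigation would need, captured at the time of the engagement."""
    capability_appropriate: bool    # Was the appropriate AI used?
    employed_within_ttp: bool       # Was the AI employed correctly under appropriate circumstances?
    behavior_as_tested: bool        # Did the AI behave as tested, or take unexpected actions?


def classify_mishap(record: EngagementRecord) -> MishapFinding:
    """Map the three investigation questions to a line of accountability."""
    if not record.employed_within_ttp:
        return MishapFinding.OPERATOR_OUTSIDE_VALIDATED_TTP
    if not record.capability_appropriate:
        return MishapFinding.COMMANDER_MISAPPLIED_CAPABILITY
    if not record.behavior_as_tested:
        return MishapFinding.LATENT_DESIGN_FLAW
    return MishapFinding.NO_FAULT_FOUND


# Example: the system was appropriate and correctly employed, but acted unexpectedly.
print(classify_mishap(EngagementRecord(True, True, False)))  # MishapFinding.LATENT_DESIGN_FLAW
```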

Ethical Considerations

People should be held accountable for the actions of AI systems they oversee, not the AI systems themselves. AI can be coded with ethics, but it cannot sin, tell right from wrong, experience moral injury, or suffer — only people can. AI can be delegated tasking, but cannot be held morally accountable for the consequences of its actions. However, how will people remain accountable for AI’s actions when imperfect, opaque, automated weapon systems are increasingly acting faster than humans can understand?

AI can decipher many complex patterns faster than its human designers and overseers. AI-enabled weapon systems will increasingly act in ways that are too fast and complicated for humans to appreciate in real time. Efforts to exert control, like slowing or simplifying AI, could hamper performance.

The military should prepare to implement strong measures to ensure unambiguous human accountability over each of its automated systems. Success will require infrastructure investment, policy innovation, and an understanding of AI’s strengths and weaknesses. AI is not a silver bullet — completely trustworthy AI-enabled combat systems do not exist, and may never be achievable. Despite AI’s limitations, it will deliver extraordinary advantages in combat. AI will increasingly execute many combat tasks faster, better, and cheaper than any human, including weapon-target assignment, dynamic frequency allocation, and the coordination of swarming systems. Trained military operators responsible for automated combat systems should be accountable for the actions of those systems. The trust that the public has in its military hangs in the balance.


Lieutenant Commander Connor McLemore is an E-2C naval flight officer with numerous operational deployments during 19 years of service in the U.S. Navy. He is a graduate of the United States Navy Fighter Weapons School (Topgun) and an operations analyst with Master’s degrees from the Naval Postgraduate School in Monterey, California, and the Naval War College in Newport, Rhode Island. He is currently with the Office of the Chief of Naval Operations Assessment Division (OPNAV N81) in Washington, D.C.

Lieutenant Commander Charles Clark is an SH-60B/R naval aviator and currently serves in the Office of the Chief of Naval Operations Assessment Division (OPNAV N81). Lieutenant Commander Clark completed numerous operational deployments during 17 years of service. He is a graduate of the U.S. Naval Academy with a Bachelor of Science in Computer Science and earned a Master’s degree in Operations Research from the Naval Postgraduate School. At NPS, he was awarded the Chief of Naval Operations Award for Excellence in Operations Research and a certificate in Scientific Computation.

 The views expressed here are theirs alone and do not reflect those of the U.S. Navy.

Image: U.S. Navy (Photo by Mass Communication Specialist 1st Class Fred Gray IV)