First, Manage Security Threats to Machine Learning

This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It responds to question 3 (parts a. and b.), which asks what types of AI research the national security community should focus on, how the government, academia, and the private sector should work together, and what type of infrastructure the United States needs.

***

The U.S. Army tank brigade was once again fighting in the Middle East. Its tanks were recently equipped with a computer vision-based targeting system that employed remotely controlled drones as scouts. Unfortunately, adversary forces deceived the vision system into thinking grenade flashes were actually cannon fire. The tank operators opened fire on their comrades two miles away. Although the U.S. brigade won the battle, it lost six soldiers, five tanks, and five fighting vehicles, all to friendly fire. The brigade commander said, “Our equipment is so lethal that there is no room for mistakes.”

This story is based on an actual event. The tanks involved did not have automated computer vision systems — but someday they will.

Deception is as old as warfare itself. Until now, the targets of deception operations have been humans. But the introduction of machine learning and artificial intelligence opens up a whole new world of opportunities to deceive by targeting machines. We are seeing the dawn of a new and expanded age of deception.

Rapid advances in machine learning are enabling the creation of novel technologies for civilian and military use, ranging from video and text classification to complex data analysis and decision-making. However, the rush to implement and field insecure systems containing advanced machine learning components introduces dangerous vulnerabilities that will be exploited by nefarious actors in ways we have barely begun to understand. Some of the most promising applications of machine learning technology are decision support systems in which life-critical decisions are based on rapid analyses of enormous amounts of data. Introducing a deception into this type of system could have catastrophic consequences, such as causing friendly fire or sending troops into an ambush. The growing reliance on machine learning, together with its susceptibility to deception, has far-reaching implications for military operations.

Vulnerabilities in Machine Learning Systems

Inherent vulnerabilities in machine learning technologies must be recognized as a critical concern that deserves a prominent, national-level response. While it does not appear possible to eliminate all vulnerabilities, it is possible to mitigate them. A basic mitigation technique is to ensure that a human operator has a clear and continuous view of the system’s behavior, with some form of manual override if something goes wrong. In a system designed to learn and adapt to its environment, there should also be continuous testing during operation to make sure the system has not gone off track, as well as a requirement for periodic human operator observation of, or interaction with, the system to prevent the operator from being lulled into a false sense of security.

Studying the security of machine learning systems may not be trendy, but it is just as important as realizing their promise. There is a vulnerability at the core of machine learning: operators might know what a system was programmed to learn, but they simply cannot be sure of what it actually has learned. This makes the vulnerability impossible to address completely, and it invites attempts to attack the machine learning system to test what it has learned and to find its weaknesses.
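
To make these mitigations concrete, the sketch below shows one simple form that continuous operational testing with a manual-override path could take: routing low-confidence predictions to a human operator instead of acting on them automatically. It is a minimal illustration with a placeholder model and an assumed confidence threshold, not a description of any fielded design.

```python
# Minimal sketch (illustrative assumptions only): a placeholder classifier
# whose low-confidence outputs are routed to a human operator instead of
# being acted on automatically. The model, threshold, and "sensor frames"
# are stand-ins, not a description of any fielded system.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # placeholder model
model.eval()

CONFIDENCE_THRESHOLD = 0.90  # assumed policy: below this, defer to a human

def classify_or_defer(frame: torch.Tensor) -> str:
    """Return the automated decision, or flag the frame for human review."""
    with torch.no_grad():
        probs = F.softmax(model(frame.unsqueeze(0)), dim=1)
    confidence, label = probs.max(dim=1)
    if confidence.item() < CONFIDENCE_THRESHOLD:
        return "DEFER_TO_OPERATOR"  # manual-override path: a human decides
    return f"class_{label.item()}"

# Simulated stream of incoming frames
for _ in range(5):
    print(classify_or_defer(torch.rand(3, 32, 32)))
```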

Machine learning technology can be attacked in different ways, but all such attacks seek to deceive a system into misperception, misclassification, or faulty decision analysis. Efforts to detect and counter deception are at a very early stage and currently lack a solid theoretical foundation.
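
To illustrate one well-studied class of such attacks, the sketch below applies the fast gradient sign method, a basic evasion technique from the adversarial machine learning research literature, to a placeholder image classifier. The architecture, data, and perturbation budget are illustrative assumptions; the point is only the mechanics of the attack: a small, gradient-guided change to the input aimed at flipping the model’s output.

```python
# Minimal sketch of the fast gradient sign method (FGSM), a basic evasion
# attack from the adversarial machine learning literature. The model is an
# untrained placeholder and the input is random noise; a real attack would
# target a trained, fielded system or a surrogate of it.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(                       # stand-in for a vision classifier
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 10),
)
model.eval()

x = torch.rand(1, 3, 32, 32)                 # stand-in for a sensor image
with torch.no_grad():
    y_clean = model(x).argmax(dim=1)         # label the system currently assigns

# Craft a small perturbation that pushes the model away from its own prediction.
x_adv = x.clone().requires_grad_(True)
loss = F.cross_entropy(model(x_adv), y_clean)
loss.backward()
epsilon = 0.03                               # perturbation budget (assumed)
x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    y_adv = model(x_adv).argmax(dim=1)
print("prediction changed by the perturbation:", bool((y_adv != y_clean).item()))
```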

Military Implications of Machine Learning Insecurity

Machine learning vulnerabilities have fundamental and far-reaching implications for the Defense Department. The Pentagon has a major push under way to integrate machine learning across the entirety of its operations to counter major competitors. This effort began in April 2017 with the establishment of the Algorithmic Warfare Cross-Functional Team. The team’s goal is to use AI and machine learning technology to make sense of and integrate masses of data from disparate sources at high speed. The vision of highly automated data gathering and processing to shorten decision cycles is enticing and has the potential to preserve the U.S. technological edge over sophisticated adversaries.

Beneath the hope and hype surrounding machine learning, there is a raging competition between attackers, who are constantly finding new ways to fool, evade, and mislead machine learning systems, and defenders, who are trying to find the blind spots and eliminate vulnerabilities. This dynamic is here to stay because of the inherent vulnerabilities of machine learning systems. In fact, as machine learning takes over more and more tasks, new vulnerabilities are certain to emerge.

Consider the increasingly complex requirements for logistics planning and support on the modern battlefield. The application of machine learning will enable the military to be proactive and predictive as all kinds of tasks become automated and efficiency and accuracy reach new heights. However, adversaries can try to deceive such a system, leading planners and operators to make catastrophic and potentially lethal logistical errors.

The introduction of machine learning in its current state of development greatly increases the potential for attack, while relatively little has been done to develop an understanding of how to manage that risk. The question now is how those who adopt machine learning, like the Department of Defense, can manage it in a way that reduces its inherent risks to an acceptable level while remaining competitive.

The Algorithmic Warfare Cross-Functional Team had a critical shortcoming: it did not explicitly include a mandate to determine how to detect and counter attempts by an adversary to mislead and deceive machine learning technology. This was a lost opportunity. All future efforts should include a specific mandate to identify and develop mitigation strategies for vulnerabilities that are uniquely introduced by the use of machine learning technologies.

Some such oversights are inevitable, since a comprehensive theoretical understanding of the vulnerabilities of machine learning systems is currently lacking. The research community has recognized this gap, which has led to recent research investments by, for example, the Defense Advanced Research Projects Agency (DARPA) and the National Science Foundation. However, these efforts are at an early stage, while the race is on to operationalize machine learning in fielded applications.

Some prior experiences illustrate the potential dangers and point to possible mitigation measures. In November 1988, one of the earliest computer worms, the Morris worm, wreaked havoc on the internet. Several weeks later, the first Computer Emergency Response Team Coordination Center (CERT/CC) was established at the Software Engineering Institute of Carnegie Mellon University to help industry, academia, and government manage cybersecurity. That model was soon reproduced the world over, and the number of incident response teams multiplied over the next two years. However, the WANK worm of 1989 revealed a failure of the teams to coordinate globally. The result was the creation of the Forum of Incident Response and Security Teams (FIRST) in 1990, with a mission to coordinate and facilitate collaboration and cooperation among incident response teams globally. US-CERT, now part of the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency, is a FIRST member along with over 400 similar organizations worldwide.

Despite the heroic efforts of FIRST and all of its members, the Secretary of the Navy Cyber Security Readiness Review of March 2019 points out that “there are many bad actors, but China and Russia in particular have focused their efforts in strategic ways and are executing at scale to achieve their objectives, while the US remains relatively flat-footed, and is too often incapable of defending itself.” Clearly, the United States faces capable adversaries willing to exploit vulnerabilities to their advantage. Washington should apply lessons from the cyber realm to machine learning.

How the Pentagon Should Manage Machine Learning Security

The Joint Artificial Intelligence Center (JAIC), the Department of Defense’s center of excellence for AI, together with the department’s laboratories and DARPA, has already taken initial steps in the right direction. But those efforts are just a beginning. The Pentagon should not repeat the mistakes of the past, when software and cybersecurity vulnerabilities were for too long treated as an afterthought instead of as first-class national security threats. Developing strategies and techniques to mitigate and manage security problems in machine learning systems before those systems are developed and deployed should be a top priority. Machine learning security requires extensive organizational support and systematic, thoughtful funding.

Existing organizations like FIRST and all of its CERT members worldwide should be strengthened and extended. They will need to build out their efforts by directly addressing the increased vulnerability of computer systems that results from the introduction of machine learning technology. A classical computer security breach is defined as “any incident that results in unauthorized access of data, applications, services, networks and/or devices by bypassing their underlying security mechanisms.” A machine learning system, however, can be deceived without any such access, simply by exposing it to carefully chosen external inputs. For example, in a recent experiment, a simple rotation of a revolver in a scene caused a machine vision system to perceive the revolver as a mousetrap. This type of attack is beyond the experience of today’s CERT personnel. Existing personnel will therefore have to be complemented with new kinds of expertise, and additional research agendas will be required to match the unique vulnerabilities created by the use of machine learning. These organizations must recognize that while machine learning systems can exceed human performance in many ways, they can fail in ways unimaginable for humans.
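
The sketch below illustrates this kind of no-access probing in its simplest form: rotating an input image and checking whether a classifier’s decision stays stable. The model is an untrained placeholder standing in for a fielded vision system; the test itself never touches the system’s internals.

```python
# Sketch of "no-access" probing: neither an attacker nor a tester touches the
# system's internals; only the input changes. Here an image is rotated in
# 90-degree steps and the classifier's decision is checked for stability.
# The model is an untrained placeholder standing in for a fielded vision system.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 5))  # placeholder
model.eval()

image = torch.rand(1, 3, 64, 64)             # stand-in for a camera frame
with torch.no_grad():
    baseline = model(image).argmax(dim=1).item()

for quarter_turns in range(1, 4):            # 90, 180, 270 degrees
    rotated = torch.rot90(image, k=quarter_turns, dims=(2, 3))
    with torch.no_grad():
        prediction = model(rotated).argmax(dim=1).item()
    if prediction != baseline:
        print(f"decision flipped at {90 * quarter_turns} degrees: "
              f"{baseline} -> {prediction}")
```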

Some of the methods for managing machine learning security will be analogous to those already used for classical software vulnerabilities. These should include red teaming to discover and fix potential security vulnerabilities, as well as confidential reporting of vulnerabilities discovered in machine learning systems (including security flaws, potential adversarial inputs, and other types of exploits). While there is bound to be some uncertainty as to what a machine learning system actually has learned, critical vulnerabilities might be averted through the use of new types of formal verification to prove key properties of machine learning systems.
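
Formal verification of machine learning remains an open research area. One family of techniques, interval bound propagation, pushes guaranteed input bounds through a network to certify that no perturbation within a stated budget can change the result. The fragment below sketches that idea for a single linear layer with placeholder weights; it is an illustration of the principle, not a production verifier.

```python
# Minimal sketch of interval bound propagation, one family of formal
# verification techniques for neural networks: given a bounded perturbation
# of the input, compute guaranteed bounds on a linear layer's output. A real
# verifier propagates such bounds through the whole network and checks that
# the predicted class cannot change. Weights, input, and budget are placeholders.
import torch

torch.manual_seed(0)
W = torch.randn(4, 8)            # weights of one linear layer (placeholder)
b = torch.randn(4)
x = torch.rand(8)                # nominal input
eps = 0.05                       # assumed per-feature perturbation budget

x_low, x_high = x - eps, x + eps
center, radius = (x_low + x_high) / 2, (x_high - x_low) / 2

# For y = Wx + b, the worst-case spread of each output over the input box is
# |W| @ radius, so every admissible input maps inside [y_low, y_high].
y_center = W @ center + b
y_radius = W.abs() @ radius
y_low, y_high = y_center - y_radius, y_center + y_radius

for lo, hi in zip(y_low.tolist(), y_high.tolist()):
    print(f"guaranteed output interval: [{lo:+.3f}, {hi:+.3f}]")
```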

Proactive defensive measures should include “white-hat” efforts to predict how machine learning advances will enable more effective attacks on systems. There should be a research agenda to develop a comprehensive theoretical understanding of machine learning vulnerabilities. Such an understanding would be applied to the development of defensive and mitigation techniques. These techniques, in turn, would be used to identify, develop, and distribute standard tools to test for common security problems in machine learning systems. Hardware-related methods should also be considered, such as incorporating security features into machine learning-specific hardware (graphics processing units, field-programmable gate arrays, and application-specific integrated circuits such as tensor processing units), for example to prevent copying, restrict access, and facilitate activity audits. Other hardware-related measures should include determining the feasibility of designing hardware with such security features and driving the adoption of that hardware.

While positive steps are being taken to address the security of machine learning systems, decisive action is needed to move such problems to a place of prominence in all of our efforts, not treat them as an afterthought. The private sector, academia, and the government should all act as equal partners, coordinating efforts and supporting each other. One way to accomplish this would be to strengthen and increase the visibility and prominence of the National Institute of Standards and Technology (NIST) AI standards effort created by Executive Order 13859 on February 11, 2019. NIST should make sure that this effort incorporates members from representative private sector, academic, and government organizations, particularly agencies related to public welfare such as the Federal Trade Commission’s Bureau of Consumer Protection and the Department of Health and Human Services, as an integral part of its work. This will help NIST facilitate critical interactions between the private sector, academia, and government. To strengthen the AI standards work, there should be mechanisms to create enforceable standards for the reliability and safety of machine learning applications in cases where public safety and welfare are affected.

Industry and academia are leading the way in machine learning research and development. Academia needs the financial support of industry and government. The government needs industry and academia to help formulate its needs and to develop and deploy machine learning applications in its service. Without a whole-of-nation approach, we stand to create massive vulnerabilities that will dwarf the benefits of machine learning and make its gains elusive.

Conclusion

While the promise of AI and machine learning in military operations is alluring, the introduction of these systems also opens up a myriad of new vulnerabilities. These vulnerabilities lie at the core of machine learning and should not be ignored. As machine learning systems assume ever more important roles in military operations, their importance as targets for deception increases. An adversary can exploit these vulnerabilities to deceive and mislead, for example, the decision support, planning, and situational awareness systems that employ such technology. In a military context, faulty results from such systems can mean battlefield losses and casualties.

It is high time that the issue of vulnerabilities in machine learning technologies be treated as a critical national-level concern. It is not possible to eliminate all vulnerabilities, but they can be mitigated. A new age of deception is upon us, and we have to acknowledge it and act accordingly, the sooner the better.

Rand Waltzman is deputy chief technology officer at the nonprofit, nonpartisan RAND Corporation. He has served two terms as an AI Program Manager at the Defense Advanced Research Projects Agency (DARPA).

Thomas Szayna is a senior political scientist at RAND. His research has focused on strategic planning for the U.S. armed forces and the future security environment.

Image: U.S. Marine Corps (Photo by Sgt. Miguel A. Rosales)