Embrace Open-Source Military Research to Win the AI Competition

Jasmin Léveillé

Editor’s Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the fifth question (parts a. and d.), which asks to what extent the federal government can rely on the private sector to develop its AI capabilities, what capabilities the government should develop internally, and what measures it should take to ensure that AI systems for national security are trusted — by the public, end users, strategic decision-makers, and/or allies.

 

What do nuclear weapons, microchips, and the internet have in common? They were all developed by the Defense Department. Moreover, their development was a Pentagon secret long before they became public. As scientists and policymakers grapple with the development of artificial intelligence, it’s worth considering whether the secrecy that guarded previous technical advances will hinder the artificial intelligence revolution.

AI doesn’t follow the conventional rules for muscling through to technological superiority, where success hinges on secrecy throughout the technology life cycle. Ironically, the Defense Department may be able to win the AI competition only if it embraces a more open research and development strategy.


Despite legitimate security concerns, the Defense Department should adopt a policy of disclosing algorithm implementations and government-owned data sets to advance its interests in AI innovation. This doesn’t just mean increasing participation in academic conferences and publishing internal research. An effective AI policy should go beyond merely adapting open-source algorithms.

The Fourth Industrial Revolution

The rise of AI has been likened to a fourth industrial revolution. Every player that can is working on it, and so far the private sector is leading the way. This makes the quest for dominance in AI fundamentally different from military competition in domains such as hypersonic missiles or directed energy, where only well-funded militaries vie for superiority. One important but unexpected consequence of the public nature of AI research is that the Pentagon faces only minor risk in releasing its AI algorithms and data sets. It should carefully consider doing so.

I arrived at this view after spending nearly a decade developing and integrating AI algorithms under various defense research and development programs. I saw how research suffered when it was conducted in an environment where external collaboration was restricted and engagement with the wider AI community was limited. The benefits of an open strategy are easy to infer by looking at the trajectory of AI research in academia and the private sector, where sharing source code and benchmark data — data used to train and evaluate algorithms — is common and the pace of innovation is rapid.

Open source lets researchers benefit from the insights of others and can drive “a constant stream of innovation.” It makes it easier to uncover and fix flaws in software and expose biases in data sets that are difficult for a researcher’s own colleagues to spot. The need for open-source tools was widely acknowledged by academia just prior to the recent boom in AI. Conversely, there is evidence that secrecy in AI impedes progress. Apple, for example, is believed to have lagged behind the rest of the AI community due to its closed research approach. Open source is widely practiced in software engineering and cyber security, where it is a known mechanism for scouting talent — a priority for the Defense Department.

But what are the risks of disseminating potentially sensitive AI technology? And what should not be disclosed?

Assessing the Risks of Open Source

To be sure, the Pentagon will have to protect sensitive AI technology as its own research advances. But, more than the technical details, what needs to be hidden are the intended uses of a technology. Disclosing an AI technology that inadvertently exposes previously undisclosed strategic or tactical objectives could indeed provide critical information to potential adversaries.

However, the risk associated with disclosing technical details is limited. Progress in AI is largely driven by research and development activities of a large, networked, public research community — academics, an increasing number of AI companies, and even individuals who have access to low-cost cloud infrastructure. This is unlike progress in most other military technologies, which is driven by research from a few well-funded state actors. This fundamental difference has important repercussions for sharing both algorithms and data. Unless the U.S. government significantly leads or trails the AI community — which is unlikely — it faces only minor risks in releasing its algorithms.

A widely recognized contest, the ImageNet Large Scale Visual Recognition Challenge, shows why. Each year, multiple teams submit AI algorithms to try to obtain the highest accuracy on object detection and classification tasks from image data — capabilities of great interest to the Department of Defense. Open-source implementations and trained models are often part of the submissions. In 2017, 29 of 38 separate submissions reached accuracies greater than 95 percent in the object-classification task.

What matters most is not that accuracy is high, but rather that the top-performing teams are tightly clustered around the same level. This clustering may reflect convergence in algorithm designs, or limitations of the data set. On another benchmark — Kaggle’s non-targeted adversarial attacks, which aimed to develop ways to fool an image classification algorithm — the top five performers in 91 separate submissions reached accuracies between 75 percent and 78 percent. The same kind of clustering holds for many other relevant benchmarks. It is unlikely that a separate, secretive research team would be able to achieve significantly better results — even if backed by a government.
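To make the clustering claim concrete, the sketch below shows one way to quantify how tightly the top submissions on a leaderboard group together: compute the spread and standard deviation among the top-scoring entries. The leaderboard values and the choice of top five are invented for illustration, not actual challenge results.

```python
# Measure how tightly the top entries of a benchmark leaderboard cluster.
# The scores below are made up for illustration.
from statistics import mean, pstdev

def top_k_spread(scores, k=5):
    """Return the range, population standard deviation, and mean of the top-k scores."""
    top = sorted(scores, reverse=True)[:k]
    return max(top) - min(top), pstdev(top), mean(top)

# Invented leaderboard: the leading entries land within a few points of one another,
# while the rest of the field falls away quickly.
leaderboard = [0.78, 0.77, 0.77, 0.76, 0.75, 0.71, 0.64, 0.52, 0.43]
spread, sd, avg = top_k_spread(leaderboard)
print(f"top-5 spread: {spread:.2f}, std dev: {sd:.3f}, mean: {avg:.3f}")
```

A small spread among the leaders, relative to the spread across the whole field, is the pattern described above.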

I’ll refer to the observation that top performers tend to cluster around a certain performance level as the “AI competition hypothesis.” Two implications of this hypothesis are worth exploring. First, near-peer adversaries who attempt to employ algorithms released by the military will observe only minor performance gains relative to what they can already find in the public domain. Russia or China may temporarily beat the AI community, but if so, not by much, and probably not for long.

Second, algorithms designed to counter AI technology released by the Pentagon will not significantly outperform algorithms that an active open-source community would release in the public domain. By submitting its algorithms to mock attacks from the open-source community, the Defense Department can make them more robust and thereby preempt attacks by near-peers.

The AI competition hypothesis appears to overlook the fact that the military and open-source communities have only partially overlapping interests in terms of problem sets and algorithms. However, this is not a crippling oversimplification. Very similar algorithms perform well in drastically different domains, and advances in transfer learning and lifelong learning mean that algorithms trained on one problem set can increasingly be applied to other problems with little additional effort.
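As a concrete illustration of transfer learning, the sketch below reuses a backbone pretrained on public ImageNet data and retrains only its final layer for a new task. It assumes PyTorch and torchvision are installed; the number of classes and the dummy batch are placeholders, not a real defense data set or workflow.

```python
# Minimal transfer-learning sketch: adapt a publicly pretrained backbone to a new task.
import torch
import torch.nn as nn
from torchvision import models

NUM_TARGET_CLASSES = 10  # placeholder for a hypothetical new problem set

# Load a backbone pretrained on public ImageNet data.
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pretrained feature extractor; only the new head will be trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the new problem set.
model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (stand-in for real imagery).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_TARGET_CLASSES, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```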

Beyond Algorithms: Releasing Military Data Sets

The performance of an AI algorithm depends on the data that it was trained on, and military-relevant data sets only partly overlap with public data sets. For example, a Department of Defense data set for person detection would likely contain targets at a much lower resolution than in the ImageNet Large Scale Visual Recognition Challenge. Perhaps the military should release its algorithms but not its target data sets. This would make it more difficult for an adversary to develop operationally relevant algorithms. However, at least two arguments support releasing military data into the public domain.

First, the Pentagon releasing target data sets would draw creative minds in the AI community to work on problems directly relevant to Washington — whereas progress elsewhere is only indirectly relevant. If there is concern around releasing a particular data set, the Defense Department might opt to release only selected parts of it. Some sensitive data could be withheld to train and test promising algorithms. Or the Pentagon could release slightly different versions of the data than it would use to train and test its own systems.

Second, even a near-peer adversary with algorithms and data in hand must overcome tough hurdles to deploy the algorithms onto operational platforms. This notion is already articulated in Department of Defense strategy, which states that “success no longer goes to the country that develops a new technology first, but rather to the one that better integrates it and adapts its way of fighting.” To make use of data and algorithms in ways that would discomfit U.S. defense aims, an adversary would need advanced hardware and might need to modify its operational doctrine. For example, a highly accurate model for automatic target recognition may suffer a large drop in accuracy if provided the wrong type of imagery, deployed on the wrong hardware, or used outside of its intended “performance envelope.” Again, the AI competition hypothesis suggests that the minimal gains the algorithms offered would likely be outweighed by integration issues.

All in all, the risks associated with disclosure of selected algorithms and data appear manageable, provided that the Pentagon maintains a fast, continuous integration process. This process would leverage the latest advances from the AI community, add a layer of retraining and test and evaluation, rapidly update fielded platforms, and ensure that the Department of Defense deploys robust AI technology that can withstand the latest known types of attacks. The importance of a fast, continuous integration process has also been recognized in studies by the Defense Science Board and the Defense Innovation Board.
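A minimal sketch of what such an integration gate might look like appears below. The evaluation functions, model names, and thresholds are invented placeholders standing in for the department's own withheld test-and-evaluation suites and attack batteries, not an actual Defense Department process.

```python
# Illustrative continuous-integration gate for AI models.
# evaluate_accuracy and evaluate_robustness are hypothetical stand-ins for
# withheld test-and-evaluation suites; the thresholds are invented.

def evaluate_accuracy(model_id: str) -> float:
    """Placeholder: score a retrained candidate model on withheld benchmark data."""
    return 0.93  # stand-in value

def evaluate_robustness(model_id: str) -> float:
    """Placeholder: score the candidate against the latest known attack suite."""
    return 0.81  # stand-in value

def should_deploy(candidate: str, fielded_accuracy: float,
                  min_robustness: float = 0.80) -> bool:
    """Promote a candidate only if it beats the fielded model on withheld data
    and withstands the current adversarial test battery."""
    return (evaluate_accuracy(candidate) > fielded_accuracy
            and evaluate_robustness(candidate) >= min_robustness)

if should_deploy("candidate-model", fielded_accuracy=0.91):
    print("Promote candidate to fielded platforms.")
else:
    print("Hold for further retraining and evaluation.")
```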

There are further ways of addressing security concerns associated with open source. The Pentagon should only release algorithms and data that don’t compromise other associated technologies, and it can always withhold whatever else it deems particularly risky. But it should recognize that most AI secrets can’t be kept long. In February, the nonprofit OpenAI research group decided not to release one of its models for fear that it might be used for nefarious purposes. By August, two recent graduates of a master’s degree program had recreated a similar algorithm and posted it to the internet for everyone to use. Upsets like this are not rare in AI. Recently a team of undergraduate students beat Google researchers on the DAWNBench benchmark test, and high school students ranked higher than trained data scientists in the UC Berkeley Real World AI Challenge.

Better to Lead By a Bit Than to Trail By a Lot

Unless the Pentagon embraces a more open approach to AI, it will be left behind. Private sector innovation in this space is too fast. The need for some secrecy is perfectly understandable — it will be important to safeguard information about how the U.S. government intends to use and apply AI. On the whole, however, the Defense Department should trust that more transparency in terms of algorithms and data sharing will yield beneficial results for the department and the nation.

The Department of Defense can also verify that adversaries are not taking advantage of this type of transparency. It should be possible to test when the AI competition hypothesis holds and when it does not. This could be done by measuring the evolution of performance across established benchmarks, or by organizing AI challenges or hackathons. The results would help Washington determine what should be made public. An open research and development strategy should help the Defense Department be a global leader in AI research, even if only by a small margin. The AI competition hypothesis suggests that top performers tend to cluster around a certain level, but performance can vary widely at the lower end of the scale. The alternative to an open research strategy risks leaving the United States trailing by a wide margin.


Jasmin Léveillé is an information scientist at the nonprofit, nonpartisan RAND Corporation. He was previously a lead research scientist at Scientific Systems Company Inc., where he led research and development efforts in machine learning for programs under the Defense Advanced Research Projects Agency, the Air Force Research Laboratory, the Army Research Laboratory, the Naval Air Systems Command, and the Night Vision and Electronic Sensors Directorate. He holds a master’s degree in evolutionary and adaptive systems from the University of Sussex and a PhD in cognitive and neural systems from Boston University.

Image: Flickr (Photo by Markus Spiske)