A Guide to Not Killing or Mutilating Artificial Intelligence Research
Editor’s Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the third question (part d), which asks for proposed changes to the acquisition process.
What’s the fastest way to build a jig-saw puzzle? That was the question posed by Michael Polanyi in 1962. An obvious answer is to enlist help. In what way, then, could the helpers be coordinated most efficiently? If you divided the pieces among the helpers, progress would slow to a crawl. You couldn’t know how to usefully divide the pieces without first solving the puzzle.
Polanyi found it obvious that the fastest way to build a jig-saw puzzle is to let everyone work on it together in full sight of each other. No central authority could accelerate progress. “Under this system,” Polanyi wrote, “each helper will act on his own initiative, by responding to the latest achievements of the others, and the completion of their joint task will be greatly accelerated.”
In the years after World War II, Polanyi heard many calls for the deliberate guidance of a national science program. These were inspired by the apparently successful record of Soviet five-year plans. Indeed, the Pentagon adopted the five-year plan in 1961, and it continues to shape decision-making to this day. Polanyi, however, thought it “impossible and nonsensical” to guide science toward particular ends. As with the jig-saw puzzle, no scientist understands more than a tiny fraction of the total domain. Joint opinion is reached when each scientist has overlapping knowledge with other scientists, “so that the whole of science will be covered by chains and networks of overlapping neighbourhoods.”
In this way, the independent initiatives of scientists result in self-coordination. The overall progress of science proceeds at its fastest possible pace. Intervention by a central authority can only “kill or mutilate” scientific progress, Polanyi argued; it “cannot shape it.”
The dispersed knowledge problem found in jig-saw puzzles and science is also present for those who wish to direct an artificial intelligence (AI) strategy for the national defense. Military capabilities may never benefit from a single general AI application. Instead, they benefit from a variety of narrow AI applications. It seems that the effort spent developing an app for autonomous flight does not contribute much to an app for ground vehicles, let alone automating logistics, target recognition, command and control, or any number of other applications. Each app requires its own data inputs, metric selection, and training.
National defense will then be most improved through an ecosystem of diverse AI applications. The integration of AI into any given task or platform is influenced by a unique configuration of components. For example, an AI algorithm requires data to be trained on, which depends on the type of sensor being deployed. An upgraded robotics component may make new applications feasible. Cybersecurity concerns may affect the desired level of automation for various tasks.
All of these local conditions and more affect the likelihood of success for any number of possible AI applications. The information is not centrally stored anywhere or understood by any one person or group. Successful applications cannot be planned with certainty; they are arrived at through conjecture and trial-and-error experimentation.
“Artificial Intelligence is different,” said the Pentagon’s David Norquist, “because the potential benefits are less clear; you know what you’re going to get with a hypersonic missile.”
As Polanyi suggested, the fastest way to make progress in AI is to delegate the initiative to the operational level. Further, researchers must pursue AI projects in full view of each other. It is through their overlapping expertise that judgment is weighed over methods and results. Researchers must then have the authority to redirect their efforts based on criticism. Through a process of mutual adjustment to updated information, the whole AI program can proceed at its fastest possible pace.
By contrast, the Pentagon centralizes project selection. It presumes that the outcomes of a project can be known in advance. Changes to the plan require a new round of bureaucratic approvals. This is the legacy of industrial-era policies based on reproducible goods. Yet, it is now being applied to data and software. For example, the 2018 defense AI strategy put a new organization — the Joint Artificial Intelligence Center — in charge of establishing processes and selecting national mission initiatives. These initiatives align with objectives in the National Defense Strategy. Decisions flow from the top down: from military policy to the center, and from there to the administrative levels and eventually to the researchers and users.
This kind of centralization is (ironically) unsuitable for AI projects. AI is largely an empirical process. Outcomes depend on the suitability of data. Rationally pre-selecting projects would require having complete information about the datasets, methods, and environments. This information, however, is precisely what the AI project is intended to discover in the first place. The decisions required of AI projects are incremental and bottom-up. Researchers solve many smaller problems that depend on local conditions. That requires speed. “Any decision that has to run up the chain to [headquarters] is a losing decision,” observed William Roper, the Air Force’s acquisition executive.
The Budget Process
Any national security strategy must consider the government budget as a reflection of political priorities. The Pentagon’s fiscal year 2020 request included $927 million in dedicated AI funding. That’s less than one percent of research and development, and close to a tenth of a percent of the total. For comparison, consider how Microsoft recently invested $1 billion in a single AI company. Or how Japan’s SoftBank Group raised a $108 billion fund for accelerating AI.
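The budget shares cited above can be checked with quick arithmetic. The sketch below assumes the commonly cited fiscal year 2020 request figures of roughly $104 billion for research, development, test, and evaluation and roughly $718 billion for the total Pentagon budget; those two denominators are approximations supplied for illustration, not figures taken from this article.

```python
# Rough check of the FY2020 AI budget share.
# The two denominators are assumptions (approximate FY2020 request figures),
# not numbers stated in the article itself.
ai_funding = 0.927     # dedicated AI funding, in $ billions (from the article)
rdte_budget = 104.0    # assumed RDT&E request, in $ billions
total_budget = 718.0   # assumed total Pentagon request, in $ billions

ai_share_of_rdte = ai_funding / rdte_budget    # under one percent
ai_share_of_total = ai_funding / total_budget  # close to a tenth of a percent

print(f"AI share of R&D:   {ai_share_of_rdte:.2%}")
print(f"AI share of total: {ai_share_of_total:.2%}")
```

Under these assumptions, the AI request works out to roughly 0.9 percent of research and development and about 0.13 percent of the total, consistent with the claim above.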
Perhaps more important than the size of the budget for artificial intelligence is the speed of decision-making. Sinking good money into bad projects will not help. Speeding up the process of selecting, funding, and iterating projects is a crucial aspect of any national security strategy for AI.
Many pious statements have been made about the importance of speed. Over the decades, attempts to move faster by delegating milestone authority or shortcutting the requirements, contracting, and acquisition processes have yielded few results. Unaddressed in past reform efforts is the budget process. It is the limiting factor on speed. Even using so-called “rapid acquisition” pathways, officials in the Pentagon still have to begin justifying most AI-related projects two or three years ahead of congressional appropriation.
China does not spend two or three years finding money. And startups cannot bear the overhead costs for so long. The Pentagon’s funding time lag has been cited as the leading cause of inadequate technology development:
The PM [program management] community cannot always predict the pace of innovation two years in advance, and funding may not be available for fast-moving S&T [science & technology] projects that are ready for transition. Therefore, a desirable S&T project may stall for 18 to 24 months, awaiting funding. This gap is sometimes called the “valley of death.”
During this time, the project loses momentum. Members of the team move on to other projects. Outsiders levy new technical requirements. Management reports and controls begin to multiply. Many of the innovations occurring at defense technology labs and in industry have failed to transition into full-fledged programs of record. AI research has been going on for over 50 years at the labs, but projects didn’t start transitioning until after China’s progress in AI shocked policymakers. This inflexibility originates with the budget process, as a special committee on information technology pointed out:
The [Pentagon’s] process for obtaining funding for new acquisition programs typically takes multiple years… For solutions that will rely on information technology, the time frame for seeking funding can be many times longer than the actual time needed to develop or procure the solution. If it is to achieve a more rapid delivery of information technology solutions, the [Pentagon] will need a more responsive process for justifying and allocating funding to address capability shortfalls.
If AI projects are going to receive funding at the speed of relevance, then the Pentagon will require a budget justified by mission type rather than project line itemization. To illustrate, each military component could request a single account for artificial intelligence. Component acquisition executives would have the flexibility to delegate their authority. Subordinates could route AI funds to any number of applications without a reprogramming action.
Perhaps funding will be spread across the management of existing program elements to augment AI. Perhaps dedicated program offices will coordinate developments across the component. In any case, each AI project should have its funding levels balanced using updated information, not information available at the start of the programming cycle from two or three years before.
The mission funding concept is nothing new. Army arsenals and Navy bureaus were funded that way up until 1961. A return to earlier practices has been wisely defended by thinkers like Frederick Mosher, Aaron Wildavsky, and Allen Schick. Budgeting by mission also has contemporary examples. For one, the Pentagon mission funds upgrades to avionics software. Additionally, the Joint Improvised Threat Defeat Organization has successfully adopted agile development with the help of mission funding. This arrangement allows managers to quickly redirect funds based on iterative feedback between researchers and users.
A different recommendation comes from the Defense Innovation Board, which proposed a new appropriation for software. It would allow programming information to be justified within a year of the budget appropriation rather than within two or three years. This is less desirable than mission funding because each project line item may still require the involvement of dozens of offices that have limited or no responsibility for execution.
Do No Harm
The national security strategy for AI is not like President John F. Kennedy’s call to put a man on the moon. There is not a well-defined goal with a single technical plan. Harnessing narrow AI means that there will be a multitude of applications, each with its own requirements, challenges, and user communities.
The strategy should not designate centralized offices to make project choices on behalf of the researchers. Such offices are particularly susceptible to groupthink and prone to neglecting alternatives, costs, and uncertainties. Instead, the ideal strategy requires empowering researchers by injecting speed and iterative learning into defense innovation.
If speed and iterative learning are the most important aspects of a healthy AI research program, then funds must be available when needed. In other words, a flexible funding mechanism is needed. The proper mechanism is a mission-funded AI account. Particular project line items would not require justification. Rather, the military components could direct the funds to AI projects based on their merit.
Instead of a list of directives, the top priority for a national security strategy is to fund an ecosystem of AI developments through a single budget account. Because mission funding withholds detailed project plans from policymakers, control must be exerted after the fact using data on actual costs and performance. Cost accounting and operational testing should provide transparency and accountability, not prescriptive mandates of what projects can or cannot be pursued.
Eric Lofgren is the Emergent Ventures Fellow at the Mercatus Center, George Mason University. He has a blog and podcast on weapon systems acquisition named Acquisition Talk. Before that, he was a Senior Analyst at Technomics Inc., where he supported Cost Assessment and Program Evaluation (CAPE) in the Office of the Secretary of Defense.