Doors of Perception

Thomas Brady and Jim Perkins

This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work.

 

“If the doors of perception were cleansed every thing would appear to man as it is, Infinite. For man has closed himself up, till he sees all things thro’ narrow chinks of his cavern”

— William Blake, The Marriage of Heaven and Hell

 

Visions of the future are inevitably anchored in the unconscious assumptions of the present (or past). The dystopian, “retrofitted” future of 2019 in Ridley Scott’s 1982 science fiction movie Blade Runner portrays a world of flying cars, digital billboards, and rebel robots that have developed human-like, subjective experiences. In predicting the future, however, what the movie gets most wrong is not Earth’s toxic atmosphere or its robots, but the continued existence of payphones (albeit with video screens). Just like the past and present, the future is a point in time whose context and constraints are shaped by powerful and subtle beliefs. Prediction, in other words, should be left to mystics and algorithms. Likewise, we should view prognostications about the future of artificial intelligence (AI) with a skeptical eye.

AI is neither a panacea nor a pandemic, and the state in which machines can learn and take on any human cognitive load (known as artificial general intelligence, or AGI) is still far off. Nevertheless, we believe the fundamental nature of the military should be open to question, and we ask that you consider our response as reaching beyond the scope of your prompts. In weighing your questions, we found that they do not go far enough in exploring the implications of AI.

Recent analysis suggests that AI is more akin to electricity than to nuclear weapons. Although AI will have a transformative role in the future of war, it should not be considered a strategic weapon. Instead, America’s continued role in the world will be shaped by its ability to leverage AI (and other technologies) in strategically advantageous ways. It ought to be viewed as an enabling technology, or more appropriately as an amplifier to “strengthen conventional deterrence against pacing competitors.” AI can influence war by unburdening humans of cognitive and perceptual tasks, amplifying their intuition and creativity. Conversely, it can intensify and distort human perception in conflict, particularly of time and space.

In trying to contort our thinking to fit the existing warfighting framework, we kept rubbing up against concepts — renewed focus on classical maneuver warfare; gray zone conflict and proxy forces; urbanization and global demographic trends; anti-access/area denial (A2AD); and demonstrated resistance to change and technology adoption — that caused noteworthy cognitive dissonance. We are left with the uncomfortable conclusion that the existing culture of the U.S. military is the greatest barrier to capitalizing on AI as a general-purpose technology.

The Innovator’s Dilemma

The U.S. military — like armed forces in other countries — is inhibited by pervasive cultural norms with origins in early modern warfare and Frederick Taylor’s Scientific Management. In the era after the Thirty Years’ War, conscripted armies led by aristocrats were characterized by “drill, discipline, mechanical tactics and scientific gunnery.” Taylorism, by contrast, emphasizes micromanagement and task-oriented optimization that treats human capital as interchangeable cogs. With the exception of the recent guidance from the new commandant of the Marine Corps, senior leadership’s visions of the future resemble the style of warfare they have studied (largely through a historical lens) for 40 years — large-formation maneuver. Our services have a poor track record of anticipating technological change. The Army between the world wars, for example, saw little value in tanks and planes. We are concerned that the past is prologue with regard to adopting new concepts or technologies outside of crisis.

Recognizing these challenges, we want to offer our perspectives as mid-career technologists and veterans to help find a unifying vision for the future. Your first question considers AI’s impact on warfare, but this framing does not account for other factors. In particular, we cannot assume that the world will remain constant while AI develops.

First, it is vital to appreciate that war and security are far more than just soldiers, bombers, and nuclear weapons. Readiness and lethality are hot buzzwords at the Pentagon right now, but the most useful applications of AI may simply be in the “mundane” areas of administration, logistics, and personalization that optimize our existing combat capabilities and stabilize strategic competition. These are not sexy and do not win budget battles, but this is what innovation most often looks like. These improvements liberate people — the most critical resource — to do what only humans can do: think critically. Incremental improvements, accumulated over time and at scale, have profound impacts and guard against being offset or overloaded — the crux of military advantage.

Second, AI is an enabling technology like electricity, not an offset by itself. At its core, AI is a combination of computing power and data that supports goal-oriented behavior in machines. Computers and data are not inherently dangerous, nor is their supply (or production) concentrated in any particular political or geographic group. As a result, AI will not just change war — it will change everything — but it will not be the only input reshaping the world.

Advances in AI are rooted in computers and networks reaching a tipping point. Those same factors are driving globalization and the hyper-connectedness of people and things, which will change the world (and the security environment). One particular example is the concept of “fragmegration” (simultaneous fragmentation and integration) and the duality of connection. Social networks have the power to connect and empower communities around the world. The same power that connects families and hobbyists also connects ideological extremists, polarizing them, highlighting contrasts, and sowing isolation among unpopular individuals and groups. In this way, globalization is increasing social unrest and conflict, particularly terrorism, gray zone conflict, and information warfare. If that is the future of conflict, then our best bet is to use AI to address those security threats.

Lastly, as part of the third offset strategy, AI will augment and displace, but not replace, humans. So far, any time a breakthrough AI has defeated a human at a complex task, the combination of person and AI has gone on to defeat the AI alone. In this way, the nature of conflict will adjust in ways that make the human experience of war at once more violent and more safe.

If you accept the assumptions and framework above, we believe that answers to the remaining questions you have posed follow rather logically.

Skillsets and Education

The Defense Department needs to develop domain expertise in computer software and an ability to develop new products internally. AI builds upon foundational aspects of computer software, and the Pentagon does not have the necessary experience. In 2018, the auto industry hired three times more software engineers than mechanical engineers, and the American Automotive Council recognized computers as a core part of its business. By contrast, the U.S. Air Force is the only service that tracks computer science skills or is building that workforce. While the Army will pay anyone who can pass an exam in even basic Mandarin Chinese, it apparently could not care less about JavaScript. This needs to change, just as the services added electricians, mechanics, and even nuclear technicians after past technological revolutions.

The services need an array of entry- and mid-level technologists (software developers, systems engineers, support engineers, cloud architects, and database engineers), and they need to empower them. The services should create basic systems and applications that will automate much of the mundane, day-to-day work that is still manual. Marines print certificates for online training so another marine can update a database; sailors submit thick, printed requests for vacation; and soldiers drive around bases to ensure vehicle yards are locked. When compounded at the department’s scale, this aggregates to hundreds or possibly thousands of person-years that could be saved with software.
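What such automation could look like in practice is simple. The following is a minimal sketch, assuming a hypothetical CSV export from a training portal and an invented database schema, of replacing the print-and-retype certificate workflow with a few lines of Python:

```python
# A hedged sketch (not a Defense Department system): ingest a hypothetical
# CSV export of completed online training straight into a database, instead
# of printing certificates for another marine to retype. The file name,
# column names, and schema are invented for illustration.
import csv
import sqlite3

conn = sqlite3.connect("training.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS completions (
           member_id    TEXT,
           course       TEXT,
           completed_on TEXT,
           PRIMARY KEY (member_id, course)
       )"""
)

# "course_completions.csv" stands in for whatever export the training portal provides.
with open("course_completions.csv", newline="") as f:
    rows = [(r["member_id"], r["course"], r["completed_on"])
            for r in csv.DictReader(f)]

# Idempotent upsert: re-running on the same export does no harm.
conn.executemany("INSERT OR REPLACE INTO completions VALUES (?, ?, ?)", rows)
conn.commit()
print(f"Recorded {len(rows)} completions with zero printed certificates.")
```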

Those working in AI should focus less on the hardware and more on integration. Many of the problems with existing technology stem not from bad hardware, but from bad user experience. The Air Force has already recognized the need for product and program managers to address this. These airmen will own the tools and create a customer service mindset — this is crucial. In addition to coders and engineers, these product owners may well leapfrog today’s centralized weapon, tool, and product development processes and instead support rapid prototyping.

The Pentagon will need to educate the wider force, not just the experts. Just as nearly every American has a rudimentary understanding of electronics, so too will every sailor need basic familiarity with machine learning concepts. The burden will not rest solely on the military, as even some elementary schools have already expanded “shop” class into broader industrial arts with 3D printers, robotics, and AI. We can also expect the creation of more no-code tools, which the military can leverage.

So, what’s left? For service members, the key is focusing on safe use. In the future, marines will need to avoid complacency, think critically, and remember that they are on the loop, as Paul Scharre illustrated in “Army of None.”

Research Priorities and Investments

We see three priority candidates for defense funding — explainability, alternative learning techniques, and extreme performance. Private-sector AI research and development should be viewed much like early electronics research. The military (and government broadly) should not expect the private sector to build weapons, but rather tools that can be adapted for military purposes. Defense research is best suited to subjects that either have limited commercial value relative to their security value or are otherwise off-limits to civilian researchers due to law or hazard.

Explainability

For AI to be entrusted with power as grave as lethal autonomy (a highly controversial subject), its decision-making must be explainable. Molly Kovite made a similar argument recently in War on the Rocks. Some believe that the apex of military AI will be lethal autonomous systems, but corporations are less likely to prioritize explainability over an algorithm’s effectiveness. There are, of course, multiple possible ways to provide such transparency, but none is validated yet, and continued defense funding would be well invested in this area.
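To make “explainability” concrete, here is a minimal sketch of one widely used technique, permutation feature importance, assuming scikit-learn and synthetic data; the feature names are hypothetical. It shuffles one input at a time and reports how much the model’s accuracy drops, revealing which features the model actually relies on:

```python
# A minimal sketch of one explainability technique: permutation feature
# importance. Shuffle each input feature and measure the accuracy drop;
# a large drop means the model leans heavily on that feature. Data is
# synthetic and the feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["speed", "heading", "emitter_type", "altitude"]  # hypothetical

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: accuracy drop when shuffled = {score:.3f}")
```

Techniques like this only approximate what a model is doing, which is precisely why validating them is a research problem rather than a solved engineering task.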

Alternative Learning Techniques

The military should lead research in the next wave of machine learning techniques. For years, deep neural nets were the hot trend in AI research, until scientists saw the limitations of supervised learning and turned to reinforcement learning (RL). RL has its own limits, and we suspect that defense investment in alternative learning styles would yield strategic value. Research on adaptability, logic, and rules engines is making progress toward better AI, and the Defense Department is well suited to lead further advances.
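For readers less familiar with the term, here is a toy sketch of reinforcement learning in its simplest tabular form (Q-learning), with an environment and reward invented purely for illustration: an agent learns by trial and error to walk down a corridor toward a reward.

```python
# A toy sketch of reinforcement learning in its simplest form (tabular
# Q-learning): an agent in a 5-state corridor learns, by trial and error,
# to walk right toward a reward. Environment and reward are invented.
import random

N_STATES = 5           # agent starts at state 0; the reward sits at state 4
ACTIONS = [-1, +1]     # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.3  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: usually exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge toward reward plus discounted future value.
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# After training, the greedy policy should step right (+1) from every state.
print({s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)})
```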

Extreme Performance

Defense investments in one-shot learning and low-power processing for edge computing are critical for achieving the AI-powered warfare of the future, recently dubbed “hyperwar.” Conflict by its nature unfolds in contested environments under intense resource constraints, and American defense leaders must assume that the nation’s adversaries will have AI capabilities aimed at America’s current vulnerabilities. A breakthrough in a man-portable device with multi-day battery life, for example, would have many applications and would reduce the electromagnetic and thermal signatures of forces on the battlefield.
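As a rough illustration of the one-shot learning idea, the sketch below classifies a new input from a single labeled exemplar per class using nearest-neighbor matching in an embedding space; the encoder is a random stand-in and the class labels are hypothetical.

```python
# A hedged sketch of the idea behind one-shot learning: classify a new
# input from a single labeled exemplar per class via nearest-neighbor
# matching in an embedding space. The "encoder" is a random stand-in and
# the class labels are hypothetical; a fielded system would use a small,
# learned encoder suited to low-power edge hardware.
import numpy as np

rng = np.random.default_rng(0)

def embed(x):
    # Stand-in for a learned encoder mapping raw input to a unit vector.
    return x / np.linalg.norm(x)

# One labeled exemplar ("shot") per hypothetical class.
support = {
    "vehicle":  embed(rng.normal(size=64)),
    "dismount": embed(rng.normal(size=64)),
}

def classify(query):
    q = embed(query)
    # Cosine similarity against each single exemplar; the closest wins.
    return max(support, key=lambda label: float(q @ support[label]))

# A noisy variant of the "vehicle" exemplar should still match "vehicle".
print(classify(support["vehicle"] + 0.1 * rng.normal(size=64)))
```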

Investments

Since AI depends on data and computational power, decoupling one or both of those dependencies would be strategically valuable. America could invest in computing infrastructure in the same way that the country built highways for public use; however, we do not recommend this. Unlike highway construction, technology is advancing so rapidly that it would be unwise for an entity as slow-moving as the federal government to get involved in running data centers.

Creating large, publicly available libraries of data would be invaluable. Whereas data is now cloistered away by companies, liberating it into the public domain would support wider research and innovation breakthroughs that could yield defense applications. We recognize that there is a limit to what data could be in such libraries and that there is very likely a dearth of labeled training data in classified environments. The military and intelligence communities must find ways to bridge this gap and to get low-cost workers to label data (e.g., a classified Mechanical Turk) for future use. Such investments in data would support the digital shipyards of the 21st century.

Investments in AI research will only reap rewards if the government can leverage them quickly. AI is an extension of software, and the steps taken (or recommended) within the Software Acquisition and Practices study are just the starting point for improving digital acquisition. Software development never ends — understanding this will be fundamental to acquiring next-generation applications. With AI, the concept of DevOps, which combines software development and information technology operations, extends to continuous integration and continuous delivery: models are constantly improved rather than finished. Further reforms in acquisition will also need to drive interoperability between platforms through common standards and serverless architecture.
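Here is a minimal sketch of what “constantly improved rather than finished” can mean in practice, assuming scikit-learn and synthetic data: each retrained candidate model is promoted into service only if it beats the current production model on held-out data. The gate, margin, and model choice are hypothetical, not drawn from any real defense pipeline.

```python
# A minimal sketch of a continuous-delivery gate for models: a retrained
# candidate is promoted only if it beats the production model on held-out
# data. Names, margin, and model choice are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_hold, y_train, y_hold = train_test_split(X, y, random_state=0)

# "Production" model trained on an older, smaller slice of the data.
production = LogisticRegression(max_iter=1000).fit(X_train[:500], y_train[:500])

def retrain_and_gate(production, X_train, y_train, X_hold, y_hold, margin=0.01):
    """Retrain on the latest data; promote only on a clear holdout win."""
    candidate = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    prod_acc = production.score(X_hold, y_hold)
    cand_acc = candidate.score(X_hold, y_hold)
    if cand_acc >= prod_acc + margin:
        return candidate, cand_acc  # promote: ship the new model
    return production, prod_acc     # hold: keep serving the current model

production, acc = retrain_and_gate(production, X_train, y_train, X_hold, y_hold)
print(f"Serving model holdout accuracy: {acc:.3f}")
```

In a mature pipeline this gate runs automatically on every retraining cycle, which is exactly the continuous-delivery discipline that acquisition reform needs to accommodate.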

Questions for Further Research

At the outset of your prompts, you said that the current visions of the future are either too mundane or too fantastic. Our sense is that this work could have an even greater impact if the problem space were approached slightly differently. We offer the following additional questions. The essential question is not what might happen if the United States fails to develop robust AI, but rather what might happen if the United States fails to apply AI in ways that address the new threat environment.

In what ways do we imagine military applications of AI could develop dangerously or unpredictably and how can the United States prevent those scenarios?

Unpredictable self-learned lessons for AI need to be considered. Consider the AI that was developed to support recruiting engineers: it learned to discriminate against women because its training data was biased. The military is objectively more biased, and training data would be difficult to prepare. Using AI in personnel management decisions (or recommendations) could contradict convention or strategic priorities.
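The mechanism behind that recruiting failure is easy to reproduce in a few lines. In the synthetic sketch below, the model never sees the protected attribute directly, yet biased historical labels and a correlated proxy feature let it learn and reproduce the skew; all data and coefficients are invented.

```python
# A hedged sketch of how biased training data produces a biased model:
# the model never sees the protected attribute, but biased historical
# labels plus a correlated proxy feature (think word choice on a resume)
# let it learn and reproduce the skew. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                   # protected attribute (hidden)
proxy = group + rng.normal(scale=0.3, size=n)   # feature correlated with group
skill = rng.normal(size=n)

# Biased historical labels: hiring favored group 1 independent of skill.
hired = (skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, proxy])             # `group` itself is never an input
model = LogisticRegression().fit(X, hired)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"Predicted hire rate for group {g}: {rate:.2f}")
```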

Furthermore, consider what happens if an AI learns to cheat. Imagine that the U.S. military fields a stationary, autonomous weapon system with computer vision. In testing, the system’s reinforcement learning engine learns to fire occasional shots seemingly at random because (similar to AlphaGo’s unexpected moves) the randomness sows fear and instability in the opposition and keeps enemy forces from even approaching the engagement area, let alone entering it.

What would a cloud-native, software-centric military force look like and how does the U.S. military get there? What barriers and incentives to AI adoption exist for the military (and other security stakeholders)?

Envisioning use cases of AI is just the first step in a much longer program. Work by Elsa Kania of the Center for a New American Security outlines how the Chinese military is driving its own transformation process in three steps: digitization, informatization, and intelligentization. Where should the United States reinvest the savings from AI? How should the U.S. military employ the soldiers and others it displaces? These are second-order questions that we should consider.

One opportunity is to invest in accelerating the security clearance process without increasing risk. This includes both corporate and individual clearances, so that the U.S. government can ensure the right people and firms have access to information and can make mission-critical investments before America’s competitors do — a growing fear within the Pentagon.

What other forces are reshaping the world order, and how can America, its allies, and its adversaries use AI for and against American interests?

Every day, companies die because they fail to adapt. The difference between Blockbuster Video and Netflix was not the use of AI. Had Blockbuster optimized its inventory, store locations, and recommendations, the company would still have gone bankrupt for failing to see the shifting consumer trend. The risk to incumbents from upstarts comes from exploiting niche markets or new technology, and our military is arguably already falling behind.

How can the United States leverage AI to mitigate these new threats (and others) to stability? This is not purely a question for the Defense Department but rather the entire security establishment. In addition to globalization, we see climate change as a growing driver of humanitarian crises, especially related to food and water scarcity — two historic catalysts for social unrest and mass migration.

What impact will AI have on America’s strategic strengths and vulnerabilities?

Where is the United States strategically weak or vulnerable, and how can AI address those threats? How will America’s adversaries react to various AI-enabled scenarios that disrupt America’s advantages? Is it more advantageous to use AI to enhance strengths, mitigate risks, or destabilize adversaries? For example, Russia is not pursuing AI to match the United States conventionally; massing autonomous systems is a competition America would still win. Instead, AI allows Putin to remain a strategic player on par with America. American leadership should be looking for ways to support liberal democracy and negate Russia’s short-stack strategy by reducing the cost of policing bad actors.

American military forces have spent the last 18 years fighting in areas where their exquisite technological strengths are of little value. If America’s new strength is AI, what are the new camouflage and the new IED? Decoys, face manipulation, and lasers are just some ideas.

These are subtly different from what you have proposed, but we believe the additional questions widen the discussion to include other relevant factors, yielding insights and strategic opportunities that currently remain disjointed. Ultimately, AI alone will not be a strategic advantage.

More Than Just Buzzwords

Senior military leaders want to envision a future of conflict that resembles the types of wars America has won before, but this is naive. America’s adversaries will not challenge it in the areas where America is strong. Near-peer conflict is a boogeyman used to deflect calls for reform, shrinking budgets, or accountability. There is a reason the U.S. military did not figure out how to buy software until 2019 — its leaders have preferred the status quo.

Not everything in Blade Runner failed to come true. In 2019, it is common for humans to talk to their computers, as they did in the Ridley Scott classic. That said, we do not need to predict the changes to warfare wrought by AI with any specificity to understand that our old operating assumptions will do very little to prepare the U.S. military. The services are not task-organized to confront this challenge in a meaningful way. While there may be pockets of tempered radicals effecting change in the Pentagon, there is not a broad-based recognition that the American way of war must change — soon.

To start, something akin to the SOF [special operations forces] Truths must be adopted as tenets regarding AI in war. The SOF Truths outline constraints that guide the development and maintenance of a special operations capability. Similar declarative statements can describe the limits, expected outcomes, and values that should inform prioritization, key decisions, and constraints in developing AI in a military context. From there, the United States can develop a whole-of-government strategy that addresses AI in warfare, provides for the national defense, and sustains American values of mercy and dignity in times of conflict.

Thomas Brady is a former congressional advisor, Defense Department civilian, and Green Beret who now works for a technology company in Seattle. He infrequently tweets at @theRealTeeBrady.

Jim Perkins works on national security technology projects in Seattle. Previously, he served on active duty for 11 years and is now a major in the Army Reserves assigned to the 75th Innovation Command. From 2016–2018, he was the executive director for the Defense Entrepreneurs Forum, a registered 501(c)(3) not-for-profit network of national security innovators. He tweets at @jim_perkins1.

Image: Libre Shot