The AI Literacy Gap Hobbling American Officialdom

Michael C. Horowitz and Lauren Kahn

This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It broadly addresses the second question (part b.) on expertise and skills.

Rarely is there as much agreement about the importance of an emerging technology as exists today about artificial intelligence (AI). Rightly or wrongly, 91 percent of tech executives and 84 percent of the general public believe that AI will constitute the next revolution in technology, according to a 2019 survey of 1,000 members of the general population and 300 technology sector executives. Along with the public, companies, universities, civil society organizations, and governments are all rushing to understand exactly what sort of impact AI will have on their respective daily operations. Most people will not be AI experts. But just as military personnel, policymakers, and intelligence analysts in previous generations needed to learn the basics of electricity and combustion engines to drive national security forward, so they will need to learn the basics of AI now. A renewed emphasis on AI education for those who will make key decisions about programs, funding, and adoption is essential for the safe and effective U.S. adoption of AI in the national security sphere.

Within the U.S. government, there are several ongoing initiatives designed to ensure U.S. leadership in technology development and adoption. The February 2019 White House strategy on AI, for example, backed by an executive order, demonstrates broad recognition of AI’s importance. Within the Defense Department, the creation of the Joint Artificial Intelligence Center and Defense Advanced Research Projects Agency grant programs on AI highlight the military’s interest in AI. Moreover, U.S. leadership in AI may be one of the few areas of bipartisan agreement on Capitol Hill at present. The congressionally authorized National Security Commission on Artificial Intelligence, which released its interim report in November 2019, is investigating how to ensure the United States remains the world leader in AI research and uses algorithms in a safe and effective way in multiple areas.

Former Google CEO Eric Schmidt stated in 2018 that the Department of Defense “does not have an innovation problem; it has an innovation adoption problem.” This is congruent with a large body of academic research on military innovation, which consistently shows that successful innovation adoption has more to do with organizational change and bureaucratic politics than with technological invention itself.

Because AI is a general-purpose technology, the corresponding adoption challenges may prove especially difficult. There is a great deal of emphasis at present on how the United States can more effectively recruit and retain AI talent to work for the national security community and the government as a whole. This is critical, but the people making decisions about the use of algorithms, from the situation room to the battlefield, will be military leaders and policymakers, not AI experts; they will not necessarily even be informed about current developments in AI. Thus, a vital challenge is familiarizing and educating government leaders and policymakers about AI. This is a different challenge than incentivizing those with AI expertise to work for the U.S. government. Instead, it is about AI education for the policy community.

AI education for the policy community, from military leaders to Hill staffers to senior government officials, needs to happen now, as cases outside of the national security sphere illustrate. For example, in December 2017, the New York City Council unanimously passed a bill requiring the Office of the Mayor to form a task force dedicated to increasing transparency and oversight of the algorithms used by the city. A local city council member characterized the bill as essential, saying of the algorithms, “I don’t know what it is. I don’t know how it works. I don’t know what factors go into it. As we advance into the 21st century, we must ensure our government is not ‘black boxed.’”

This example touches on a key dynamic: Top policymakers, who are generally not technically trained, are at increasing risk of being “black boxed” as technological complexity grows. This is especially true given open questions, even at the vanguard of AI research, about the “explainability” of algorithms. Yet organizational decisions about AI adoption and applications that “generally shape the impact of that technology” are being made today. Section 256 of the FY20 National Defense Authorization Act provides a first step toward national security literacy in AI, requiring the Secretary of Defense to develop an AI education strategy for military service members. But there are questions about implementation, given well-known issues with professional military education, and AI literacy is needed across the national security community, not just among service members, since most policymakers are not in uniform.

The effective use of algorithms in the national security sphere requires basic education: recognition of what algorithms can and cannot do, the risks of automation bias and of cognitive offloading to machines, and the dangers of accidents and data poisoning, among other challenges. These need to be demystified and made relevant to national security officials, especially senior leaders who may not consider themselves tech-savvy. It is impossible to hedge against the potential dangers of a technology if those using it do not fully understand it and therefore may not be able to control it. Failure to control for the biases, weaknesses, and accidents of even the most innocuous-seeming algorithms could create unintended policy outcomes that undermine U.S. leadership in AI and U.S. national security more broadly. Fortunately, there are a number of paths forward for effective AI education for end users, from online coursework to programs in civilian universities to specially designed national security training programs.

Why and What?

In the years ahead, high-level government leaders, despite having a relatively rudimentary understanding of AI, will make an array of formative decisions about it: funding, operations, assessments of adversary developments, the ethical and moral factors shaping adoption, the management of associated employment issues, and more. From an organizational perspective, adopting AI may therefore resemble the way entire organizations shifted in response to electricity, the combustion engine, the railroad, and other dramatic advances of prior centuries. AI is a wide-ranging technology that is already demonstrating the capacity to shape and alter the innumerable ways in which we engage with the world, from healthcare to soccer and everything in between.

Policymakers and national security officials will increasingly address AI-driven policies and technologies, make fundamental decisions based on machine learning techniques and algorithms, and help develop strong partnerships with those creating the technology. How do we ensure that decisions being made about AI at the top levels of government are well-informed ones, when policymakers, rather than technical experts, will make or review most of those decisions? And, practically speaking, what do they need to know?

Without baseline knowledge, policymakers won’t know what questions to ask, will be unable to frame the issues they are trying to solve as “AI problems,” and might be overconfident in their understanding of AI and therefore of what is feasible or practical. This could increase the chance that deployed algorithms produce more errors and accidents than necessary. National security leaders need to stay current on innovations that can change global dynamics, as AI has the potential to do. Leaders must be early adopters of the functional knowledge needed to use algorithms so that they can conduct net assessments and make informed strategic decisions.

Beyond constituting an additional substantive dimension of planning and policy decisions, AI will also change decision-making processes themselves. To the extent that some applications of AI, in the context of human-machine teams, function as decision aids, the humans using the technology will need some understanding of how it works. To make optimal decisions about American AI policy and strategy, policymakers will need a clear grasp of AI principles and techniques: enough to understand how algorithms work and see potential benefits, but also to recognize downside risks and avoid getting carried away by the hype.

First and foremost, decision-makers must understand the potential pitfalls, risks, and difficulties facing current AI technology. From technical issues such as difficulties in sharing data and the potential for data poisoning, to wider systematic biases in data sets, to “brittleness” when algorithms are deployed outside their design parameters, to the possibility of critical accidents, algorithms are anything but foolproof. Using AI effectively, which means safely and ethically as well, requires that those adopting the technology be keenly aware of these potentially devastating dangers and errors. If those in charge fail to account for these challenges, they might end up doing more harm than good.
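To make “brittleness” concrete, consider the minimal sketch below. It is our own illustration, not drawn from any system discussed here: the data are synthetic and the model is a generic scikit-learn logistic regression. A classifier that performs nearly perfectly on inputs resembling its training data degrades sharply, and silently, once the input distribution shifts outside the conditions it was built for.

```python
# Toy illustration of "brittleness": a classifier trained under one set of
# conditions can fail badly when inputs drift outside those conditions.
# All data, labels, and numbers here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two well-separated clusters (the "design parameters").
X_train = np.vstack([rng.normal(-2, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
y_train = np.array([0] * 500 + [1] * 500)
model = LogisticRegression().fit(X_train, y_train)

# Test data drawn from the same distribution: accuracy is high.
X_test = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y_test = np.array([0] * 200 + [1] * 200)
print(f"in-distribution accuracy: {model.score(X_test, y_test):.2f}")  # close to 1.0

# Test data after a distribution shift: same task, but the class-0 cluster
# has moved across the learned decision boundary. The model misclassifies
# most of those points without any indication that anything has changed.
X_shift = np.vstack([rng.normal(1, 1, (200, 2)), rng.normal(5, 1, (200, 2))])
print(f"out-of-distribution accuracy: {model.score(X_shift, y_test):.2f}")  # far lower
```

The point is not the specific numbers but the failure mode: nothing in the model’s output flags that it is now operating outside its design parameters, which is exactly why end users need to know to ask.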

AI education for end users must provide leaders with a firm grasp of the basic underlying principles of AI and a grounded sense of its development trajectory. Like the combustion engine or electricity, AI is an enabling technology that will augment existing systems and enable new functions, rather than a single piece of technology with a specific set of uses. Recognizing the general-purpose character of AI will help policymakers avoid the misconception that AI is a widget that can be bolted onto an existing capability to make it autonomous. Coupled with a recognition of the limits of algorithms, framing AI in this manner will help less technical security leaders better envision where AI fits into existing institutions and systems, and where integrating or introducing it would be most useful. Finally, it is essential for U.S. leaders to be aware of the status of AI development in other countries, as well as the AI-centric policies and regulations, funding mechanisms, organizations, educational initiatives, human capital systems, and best practices that other countries have employed to cultivate their own nascent AI programs.

How?

U.S. policymakers need to start making decisions about AI as soon as possible. In addition to short-term efforts, however, now is the time to initiate the medium- and long-term institutional and organizational shifts needed to ensure a technically competent national security leadership.

Short-term options for AI education include methods already familiar to the national security community, such as simulations and wargames. In addition, some think tanks and organizations are already producing reports targeted at senior leaders and officials, marketed as a sort of “crash course” handbook on AI use in the public sector.

On the more technical side, concrete examples of successful and unsuccessful applications, cognitive bias training focused on the risks of automation bias, and a basic “coding 101” would provide a bedrock of knowledge that senior leaders could then build on independently.

Another important pathway could involve executive education programs at civilian universities. Stanford University’s AI executive education program, for example, is designed to provide leaders with the information necessary to effectively and responsibly understand and use AI. Providing opportunities for relevant personnel to take those types of courses could pay dividends.

Of course, applications of AI in the national security community also generate unique problems and challenges, particularly when it comes to the morality and ethics of the use of force. Designing and implementing specialized AI training courses for national security professionals is therefore another promising course of action. One example comes from Georgetown, where Dr. Ben Buchanan and the Center for Security and Emerging Technology (CSET) have implemented a program in AI education for congressional staff. Scaling these efforts rapidly should be a priority.

Even if the United States succeeds in recruiting more AI experts into government, those experts cannot be everywhere at once. Ensuring that senior leaders’ staffs include at least one person with AI expertise would support policymakers and arm them with on-demand knowledge. But relying on outside experts or a single AI-trained staff member will not be sustainable over the long term, as an increasing number of decisions will require AI literacy and technical familiarity. Earlier training, selection, and promotion of those with AI knowledge within the policy realm, much like the United States’ incorporation of engineers in the late 19th and mid-20th centuries, would place America on a more direct trajectory toward global AI leadership.

Finally, in the long run, AI literacy among those making decisions about the development, procurement, and deployment of algorithms will depend on improvements in STEM education, at the K-12 level as well as in colleges and universities. The higher the baseline level of STEM knowledge among future leaders, the easier it will be to bring them up to speed on the capabilities and limitations of algorithms that could become increasingly important for American national security.

World-Changing Decisions Are Looming

Given the promise of AI, senior leaders will soon make critical decisions about its adoption. It is vital that these leaders understand the significance, capabilities, and risks of algorithms so that they can make the organizational shifts necessary to solidify the United States’ status as a first mover and its position of strategic superiority. This can be accomplished by offering current leaders opportunities such as “coding 101” courses, AI boot camps, and executive education programs; by promoting those with AI expertise within government; and by emphasizing STEM education more widely for the benefit of future leaders. These individuals’ decisions will directly determine whether the United States is able to pull ahead in AI adoption and integration. It is essential that those decisions be informed ones.

Michael C. Horowitz is professor of political science and the interim director of Perry World House at the University of Pennsylvania. Lauren Kahn is a research fellow at Perry World House at the University of Pennsylvania. This article was made possible, in part, by a grant from the Air Force Office of Scientific Research and the Minerva Research Initiative under grant #FA9550-18-1-0194.

Image: White House (Photo by D. Myles Cullen)