Public Opinion Is a Key to America’s Global AI Leadership

“I want YOU for U.S. Army.”

Uncle Sam’s eyes lock onto the passerby from underneath a prominent white brow. Below his stare, a pointed finger issues a clear mandate: Your country needs you. Originally a call to patriotic duty, the poster today evokes a bygone era, one in which one’s obligation to the United States in the face of imminent danger was clear. In contrast, many of today’s most pressing national security issues can feel distant and theoretical. Artificial intelligence (AI) is one of the most significant examples of this phenomenon. It is a technology with the power to give a competitive edge to the country that dominates its development. Yet, it is intangible, complex, and little understood. As a result, the American public has not rallied around AI as an issue of growing national strategic importance.

To maintain the United States’ competitive advantage in global AI leadership, Washington first needs to convince the American public that global AI dominance is an economic and national security imperative. Fear, skepticism, and a lack of understanding are slowing AI adoption and the flow of data, even as China presses ahead. Beijing announced in 2017 its ambition to be the world leader in AI by 2030, and has since made major investments in the technology, catalyzing a competition between the United States and China over who will lead the world in AI’s development and application.

Maintaining the United States’ comparative advantage in AI is critical to the preservation of American global leadership. Whoever dominates the industry will not only enjoy decades of global economic influence, but will also shape the rules and ethical principles that govern how the technology is used. Yet, without the support of the American public, the United States may lose its edge in AI — and its chance to ensure democratic principles underpin the way this nascent technology is used around the world. Smart policies that can be explained to the country are needed to address privacy and ethical concerns, educate the public, and encourage AI adoption.

Why Public Opinion Matters

In a society whose institutions are designed to answer to the will of the people, public opinion wields tremendous authority. Most Americans believe AI will have a positive impact on people’s lives. However, they also have concerns about AI-enabled surveillance violating civil liberties, the spread of disinformation, cyber attacks, data privacy, and job loss. Exacerbating these fears, trust in government is near historic lows after six decades of decline: only 17 percent of Americans say they trust the government “most of the time.” These concerns create significant impediments to maintaining America’s AI advantage: they slow the adoption of AI, hinder its integration into the defense sector, and curb the free flow of data necessary to advance AI development.

Slow Adoption of AI

The United States already lags behind China in AI adoption. According to a recent study from the Center for Data Innovation, “only 43 percent of US individuals say their employers present the development of AI and the digital transformation of the organization as being strategically important, compared with 85 percent of Chinese individuals.” In contrast, China’s major cities have created AI strategies based on national plans and are developing major AI industrial parks, research centers, and startups. Fears about job loss could also be slowing AI adoption. More than nine in 10 Chinese individuals believe AI will create new jobs, compared with fewer than half of Americans, and 73 percent of Americans believe AI will eliminate more jobs than it creates. People are also concerned that they would not be able to receive the education and training they would need to adapt to an AI-driven economy.

Challenges Integrating AI Technology into the Defense Sector

Because the development of AI has largely been driven by the private sector, the U.S. government depends on private companies to acquire the technology. Yet, ethical concerns over the military’s use of AI have threatened public-private partnerships. Google withdrew from several Defense Department projects after its engineers protested that they were incompatible with the company’s AI principles, suggesting a misalignment between the values of the tech industry and those of the United States military. In contrast, China has made civil-military fusion a national strategy: funds supporting military-civilian collaboration amount to over $56 billion, and Chinese companies are developing AI applications with potential military uses, including voice and facial recognition.

Restricting the Free Flow of Data and Shared Data Sets

Americans have important ethical and privacy concerns about the use of data in AI applications. China’s use of facial recognition technology to target Uighurs serves as a cautionary tale. Facebook’s failure to prevent the dissemination of disinformation on its platform suggests that American companies cannot be relied upon, on their own, to protect the public interest. In addition, AI companies have violated social media sites’ terms of service to illicitly collect and use people’s data. It is critical that Americans grapple with these privacy issues and uphold personal freedoms. In the interim, however, these concerns slow the data sharing that is essential to the development of AI technology. China has fewer constraints on the collection and use of data, which gives it an important short-term advantage.

Closing the Public Opinion Gap

Increasing public support for AI will require smart policies and savvy communications tactics. Communicators should make their talking points less abstract and more relevant to people’s lives. The AI industry should embrace a full discussion of the challenges and limitations of AI, and policymakers should hold companies accountable for failing to put users in charge of their data. Perhaps most important, those working in AI need to look outward and think about the public as their most important audience, instead of focusing on a narrow circle of experts and insiders.

Make AI Relevant to People’s Lives

First, communications specialists, policymakers, academics, and senior officials should identify messages and talking points that bear greater relevance to people’s lives. Fear-based tactics about the urgency of winning the “AI race” will not work because Americans are preoccupied with issues more immediately relevant to their lives. China does indeed pose a growing strategic threat to the United States. However, it is a mistake to assume that Americans will react to China with the same degree of concern today that they did during the Cold War. In contrast to the Soviet era, the United States is coming out of a long period of relative peace and economic prosperity grounded in a globally interdependent economy. To many Americans, the threat of China usurping United States global leadership may feel theoretical, geographically distant, and unrealistic.

An instructive comparison is the difficulty of convincing people of the risk posed by climate change. Research from the American Psychological Association on the psychology of climate change shows that people dismiss fear-based tactics, particularly when they have more immediate concerns in their daily lives. Highlighting risk does not budge enough people when the threat feels abstract and long-term.

When it comes to AI, messaging should be anchored around arguments that bear immediate relevance to people’s lives. Moments in which the Chinese threat is invoked should be chosen strategically and used sparingly. The majority of messaging should focus on how AI can improve the economy, increase quality and length of life, mitigate the impact of natural disasters, and save lives on the battlefield. It is important that the “proof points” behind these arguments are specific, not abstract.

Be Upfront About AI Challenges

Second, the AI industry should drive dialogue about the challenges involved in developing nascent technology. Early-stage technologies with the potential to transform the world are rife with risks and unknowns. One need only look at the series of major disasters that contributed to the implosion of the nuclear power industry, or the significant setbacks facing the medical blood-testing industry after the fraudulent claims of Theranos’ Elizabeth Holmes. Whether it is vaccines, electricity, or steamboats, new technologies earn public trust over time as they mature, stabilize, and gain economies of scale. AI is a technology with a particularly large set of unknowns, given the iterative processes required for its development and its expansive potential. A traditional “decide-announce-defend” communications strategy breeds tribal thinking and stifles debate. This approach is almost guaranteed to gloss over these unknowns and make open discussion an afterthought.

Instead, AI requires a sophisticated communications approach that not only explains the technology’s benefits, but also invites candid and nuanced conversation about its risks. One way to open this dialogue is to appeal to American values of grit, innovation, and creativity. Instead of looking at AI as a threat to be contained, it can be viewed as a challenge to be solved and mastered by American ingenuity.

Public representatives of AI initiatives should open this dialogue at staple mainstream events (for example, South by Southwest and the Consumer Electronics Show) and media outlets — not just insider publications and defense forums. Talking points should be accompanied by concrete articulations of how the public can help shepherd AI through its infancy, from supporting lighter regulations to gaining fluency in the basic vocabulary of the technology.

More could be done to facilitate dialogue about AI online. Khan Academy, for example, is a nonprofit that offers free online courses on topics as varied as managing personal finances and the basics of electrical engineering. A government-sponsored online AI “university” along those lines could offer similar online classes, webinars on AI developments, and digital Q&As with the industry’s leading experts. The goal would be to educate the public and create a monitored forum for conversation.

Require Companies to State Data Policies More Clearly

Third, policymakers should explore legislation that would require companies to state their data policies in layman’s terms. This would include a summary of how the company collects, stores, and uses personal data, along with the option for a user to opt out of data collection. These disclosures would need to adhere to a standardized national template and user interface to make information easy to understand.

Policymakers and communicators responsible for AI programs could build on this direction to position the United States as the global vanguard of responsible AI development. The Defense Innovation Board’s comprehensive set of AI ethics principles prioritizes the protection of civil liberties. However, it is not apparent that the government will fully capitalize on this moment to earn the recognition it deserves. The new principles should be pitched to major news outlets, podcasts, and television series as a development of national importance, and coverage of them should intentionally put pressure on tech giants to improve their own ethical guidelines.

Move Beyond the Expert Community

Fourth, AI communicators should prioritize a broader subset of the public as their most important target audience. Much of the conversation on AI today happens within a community of experts and insiders. Communicators should actively broaden their focus beyond policymakers and defense insiders and start speaking to younger Americans and mainstream audiences. This will require a significant shift in their approach to communications. The public expects brands (the government being one) to articulate a sense of higher purpose, reach people on their preferred channels, and provide information that is consistently useful. Communications can no longer take the form of bland press releases, PDF downloads that are never read, or articles buried in websites that have no significant readership. Instead, communicators could pitch an AI documentary idea to PBS, set up a virtual reality AI lab at a major conference, or release a starter AI curriculum that teachers can use in schools. They could pitch a multipart podcast series to California media outlets to promote the Joint Artificial Intelligence Center’s work to fight wildfires through its Humanitarian and Disaster Relief project. The possibilities are endless, but all require careful planning around an intentional target audience.

Conclusion

Despite AI’s growing strategic importance, the American public has yet to recognize the full extent to which the technology can affect American global economic and security interests. This is partly a result of missed opportunities to foster a common understanding between Washington and the public. By adopting more targeted messages, acknowledging AI’s limitations as a developing technology, holding companies accountable for responsible communication, and inviting a conversation that extends beyond Washington insiders, the United States can begin to shift public opinion in support of its strategic AI imperatives.

Invention is a key part of America’s identity. It is part of the foundation of the country’s success and is deeply intertwined with the American dream. Yet, The Economist suggests we should “doubt all stories of technological determinism. It is not the essential nature of a technology that matters but its capacity to fit into the social, political and economic conditions of the day.” Winning over public opinion is a key ingredient in the success of any transformative technology, and taking it for granted is a stumbling block from which many technologies never recover. Resources invested in winning over public opinion can enable the United States to maintain its comparative advantage in the development and application of AI, and ensure that democratic ideals guide the ways the world uses it.

Merrill Wasser is a Vice President at Atlantic 57, a business division of The Atlantic that provides clients with brand and experience design services. She consults with Fortune 500 companies, think tanks, and universities, drawing from over a decade of digital communications and marketing expertise. She was a Fulbright Scholar in China and holds a degree in East Asian Languages and Civilizations from the University of Pennsylvania.

The contents of this essay reflect the author’s own personal views and are not necessarily endorsed by her employer.

Image: Public Domain Pictures (Image by Ken Kistler)