Protecting American Investments in AI

Artificial intelligence is the most disruptive technology since the advent of nuclear fission. With market growth projections in the billions, AI is likely to infiltrate every aspect of life in subtle and not-so-subtle ways. Intelligence services will acutely feel the impact of AI as it creates new opportunities to steal secrets and new vulnerabilities to theft. Both Russia and China have prepared for this future through rapid AI adoption, economic espionage against AI companies, and offensive intelligence optimized to defeat opponents' AI programs. The United States is woefully unprepared for the methods these adversaries will employ. Developing a robust counterintelligence strategy, security posture, and partnership structure for AI is vital to protecting the American economy and its intelligence services.

If intelligence is collecting and understanding secrets, counterintelligence is the other side of the coin. Counterintelligence protects the sources and methods used to amass information and disrupts adversary intelligence collection. Counterintelligence, as it stands today, is mostly reactive, under-resourced, and viewed as a paranoid sect of the intelligence community. As a result, the community’s major AI investments do not include counterintelligence missions. U.S. intelligence uses AI mostly for dissecting information about other countries’ militaries, processing dynamic data, and finding strategic weapons. All the while, counterintelligence considerations to protect those investments and negate adversary AI ecosystems are falling by the wayside.

Killing Human Intelligence with AI

This neglect is being acutely felt in the discipline of human intelligence collection. The oldest form of intelligence collection is the art of getting people to share something they otherwise would not. Human intelligence is especially valuable because it provides nuance and detail about complex topics of interest to policymakers. It is well-suited for determining the strategic intentions of a nation or understanding an influential person’s decision-making process on an issue. This degree of insight can directly shape how a national leader approaches an economic policy negotiation or cause a shift in resources to exploit an enemy’s military weaknesses.

Human intelligence relationships are built over years and rely on high degrees of trust between the collector and the source. Consequently, collectors rely on discretion to foster trust and extract insights from sources. Today, both Russia and China have deployed a vast, AI-enabled surveillance net aimed at detecting anyone they view as subversive to the state, including people suspected of working for foreign intelligence services. The Atlantic’s Ross Anderson aptly describes China’s system:

China already has hundreds of millions of surveillance cameras in place. [President Xi Jinping’s] government hopes to soon achieve full video coverage of key public areas. Much of the footage collected by China’s cameras is parsed by algorithms for security threats of one kind or another. In the near future, every person who enters a public space could be identified, instantly, by AI matching them to an ocean of personal data, including their every text communication, and their body’s one-of-a-kind protein-construction schema. In time, algorithms will be able to string together data points from a broad range of sources — travel records, friends and associates, reading habits, purchases — to predict political resistance before it happens.

In 2020, a telecommunications firm estimated that Russia had deployed over 13 million cameras and that facial recognition algorithms were being used in criminal investigations in 10 Russian cities. While it is not known whether this system is linked to the Moscow Metro’s cashless payment system, which also uses facial recognition software, either system allows Russian security services to track the movement of any person anywhere in the capital, where the country’s most powerful policymakers live and work.

China and Russia are known in the intelligence business as “high counterintelligence threat” environments because their intelligence services are optimized against foreign operations within their borders. U.S. underinvestment in countermeasures to these Russian and Chinese AI installations has effectively removed human intelligence from the collection deck. Moreover, as Russia and China harden their domestic environments, they are also attacking the soft underbelly of Western AI programs through economic espionage.

Stealing AI From the Inside

Russia and China use their intelligence services to fuel their AI research and development. China sees trade secret theft, primarily through cyberspace, as a critical path to “leapfrog” and undermine U.S. AI capabilities. Similarly, ever since the end of the Cold War, Russia has relied on “insiders” and criminals to overcome its structural research problems. In economic espionage, Russia is known for operating in the gray space between criminal activities and official acts of state. Russian intelligence can tap vast hacker networks, use insiders, and maintain plausible deniability if its surrogates are caught.

The new “insider threat” for AI is venture capital. The Department of Defense formally acknowledged in 2021 that Russia and China are funneling state money through venture capital companies. Adversaries know that when fledgling AI companies are courted with large amounts of cash, they rarely ask many questions about who is paying. After the funds are provided, Russian and Chinese surrogates can gain access to privileged company information, place members on boards, and steer company investments away from their opponents’ national security markets. States can also use venture capital as an attack vector against opponent AI programs. Imagine an adversary who wants the Defense Department to divest from a particular AI company. A meager, public venture-capital investment by a Russian or Chinese state-owned enterprise could cause the Defense Department to cut ties and either contract with a less-capable company or jettison the AI capability altogether. This could become a powerful method to turn U.S. government bureaucracy and procedure against itself.

New Frontiers of Counterintelligence Against AI

Human intelligence and the research centers of AI are just two of today’s known avenues of attack for Russia and China. Not yet fully understood are the wide range of low-cost measures designed to undermine U.S. AI investments. The U.S. intelligence community’s vocal interest in using AI for intelligence sends a strong strategic signal to Russia and China. Both are likely posturing their intelligence services to open new frontiers of offensive intelligence operations against AI capabilities employed by the community. These adversaries rightly wager that the United States will underinvest in counterintelligence protections and thus open its AI platforms to weapons which corrupt algorithms, pollute data streams, and influence the people interpreting AI insights.

One such armament is “adversarial AI.” Adversarial AI techniques were originally developed to test the accuracy of a machine’s results. Russian and Chinese intelligence services will likely use adversarial AI for model corruption or confidence erosion. An attacker needs only to nudge the system’s outputs toward a result that is either obviously wrong or pushes the opponent toward a decision unfavorable to the facts on the ground. Alternatively, if an obvious error is discovered, any responsible organization would pause operations while it investigates. In either case, delaying or casting doubt on a perfectly well-run AI program is a win for the adversary.
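The mechanics can be illustrated with a deliberately simplified sketch. The toy model, weights, and inputs below are all invented for illustration and bear no relation to any real system; the point is only that a small, targeted nudge to an input can flip a classifier's decision while the input still looks nearly unchanged to a human observer.

```python
# Illustrative sketch only (not a real attack tool): a small perturbation
# aligned against a linear classifier's weights flips its decision.
# All weights and inputs here are hypothetical.

def classify(weights, x, bias=0.0):
    """Return 1 if the weighted sum crosses the decision boundary, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def adversarial_nudge(weights, x, epsilon):
    """Shift each feature slightly against the sign of its weight,
    in the spirit of gradient-sign attacks on linear models."""
    return [xi - epsilon * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

weights = [0.9, -0.4, 0.7]   # a toy "model"
x = [0.5, 0.2, 0.1]          # a benign input the model classifies as 1
x_adv = adversarial_nudge(weights, x, epsilon=0.3)

print(classify(weights, x))      # original decision: 1
print(classify(weights, x_adv))  # flipped decision: 0
```

Real models are far more complex, but the asymmetry is the same: the attacker only needs to find one direction in which the model is brittle, while the defender must be robust everywhere.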

Upstream of the algorithmic processes is the data itself. Data poisoning or data pollution has already been identified as a challenge in the field. Capable intelligence services could hack training data or real-world data sources to corrupt an AI solution. If the training data is poisoned, the AI will misinterpret real-world data and surface erroneous results in deployment. Attacking real-world sources of data has a similar result, but may not be as easy to diagnose without a verified, unmodified copy of the original dataset. Data poisoning is particularly nefarious because it is not visible to the engineers or users until significant anomalies are evident — if they are ever recognized at all. Moreover, if there is a data source which is wholly unique and does not have a backup copy, data pollution can mean the total corruption of a data source and deprive the opponent of critical information forever.
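A toy example, with entirely invented data, shows why poisoned training data is so hard to spot: the pipeline runs normally and the learned model looks plausible, yet flipping just two training labels quietly degrades accuracy on data the engineers never poisoned.

```python
# Hypothetical sketch of label-flipping data poisoning. A toy 1-D
# classifier learns a threshold from the two class means; an attacker
# who can tamper with training labels shifts that threshold and
# degrades accuracy at deployment. All data are invented.

def train_threshold(samples):
    """Learn a decision threshold as the midpoint of the two class means."""
    pos = [x for x, label in samples if label == 1]
    neg = [x for x, label in samples if label == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, samples):
    """Fraction of samples where the threshold rule matches the true label."""
    return sum((x > threshold) == (label == 1) for x, label in samples) / len(samples)

clean = [(1.0, 0), (2.0, 0), (3.0, 0), (7.0, 1), (8.0, 1), (9.0, 1)]
test  = [(2.5, 0), (4.0, 0), (6.0, 1), (7.5, 1)]

# The attacker flips the labels of two training points.
poisoned = [(x, 1 - label) if x in (7.0, 8.0) else (x, label) for x, label in clean]

print(accuracy(train_threshold(clean), test))     # trained on clean data: 1.0
print(accuracy(train_threshold(poisoned), test))  # trained on poisoned data: 0.75
```

Nothing in the poisoned run raises an error or a warning, which mirrors the article's point: without a verified, unmodified copy of the original dataset, the corruption may never be diagnosed.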

Lastly, while there is a great deal of discussion around mitigating bias in AI data models, the cognitive bias of users to trust the results of a technical solution is often overlooked. This “technology effect” is defined by a “tendency toward excessive optimism in decision contexts where the impact of ‘technology’ is made salient.” In other words, when humans are presented with an answer derived from technology, they are statistically more likely to choose the answer from the machine.

While much can be done to disrupt an AI program in development, opportunities may be limited when opponents build solutions on closed networks and are regularly testing the platform’s performance. Adversary intelligence services may hold in reserve their instruments of disruption, allow high-reliability AI solutions to flourish, and — once the opponent’s confidence in their AI solution is unwavering — attack the platform. If nothing else, intelligence services are patient.

Operationalizing Counterintelligence for AI

A multi-tiered intelligence threat demands an equally sophisticated counterintelligence posture. For the United States, it is particularly difficult to preserve its AI advantage while defending an open society. This challenge is compounded when its counterintelligence authorities are divided and its strategy is unfocused. Given the stakes of the AI race, strategically focusing resources, granting new legal authorities, and collaborating with industry will be vital to protecting civil liberties while confronting the threat.

Focus on China

In any counterintelligence program, it is impossible to protect every nook of the attack space. The best hope of success in counterintelligence against a capable adversary is to focus resources and attention. Right now, China’s intelligence operations are clearly the highest threat to the U.S. AI enterprise — with Russia a distant second.

Russia’s AI programs are progressing, but they are far behind China’s. According to the Carnegie Moscow Center, Russia planned to spend approximately $3.9 billion on AI development from 2020 through 2024. Georgetown’s Center for Security and Emerging Technology estimates China spent somewhere between $1.6 billion and $5.4 billion on civilian AI research and development in 2018 alone. Additionally, the economic sanctions imposed for Russia’s invasion of Ukraine likely imperil any additional funds to grow its AI programs.

Moscow’s lack of investment may force it to steal the AI secrets it needs. This “on-the-cheap” approach will likely center around augmenting its current misinformation operations with AI to advance Russia’s geopolitical goals. For the near term, the United States will need to deploy misinformation countermeasures against Russian criminal hackers and partner with international law enforcement to make arrests.

Applying the majority of counterintelligence for AI resources against the Chinese threat, while preparing for Russian operations, is appropriate given the scope of the challenge. The United States and its allies will need to monitor rising AI competitors for partnership opportunities on enduring strategic needs and abate the emergence of military AI programs in Iran and North Korea.

Recruit Meaningful AI Partners

The United States should be prepared to join forces with non-traditional foreign partners and non-governmental organizations with counterintelligence in mind. Perhaps the United States should think of potential AI collaborators as human intelligence sources worthy of recruitment.

The Joint Artificial Intelligence Center’s Partnership for Defense dialogue is a good place to start. This semi-quarterly gathering of 16 participant nations is largely focused on historical alliances in Europe, Asia, and the Middle East. The Partnership for Defense does not include representation from any countries in Africa. Given China’s interest in Africa, this omission risks being too narrowly focused and ignores the counterintelligence implications of Chinese political influence and the proliferation of Chinese surveillance technologies across the continent.

A country’s unique “placement and access” in confronting mutual enemies is a good indicator for alliance-building in AI. As part of that recruitment, counterintelligence tradecraft becomes helpful for assessing partnership opportunities; vetting shared information for veracity, reliability, and security; hardening partner AI ecosystems against adversary intelligence services; evaluating the accuracy of partner AI solutions; assessing and mitigating risks of betrayal; and installing trust-building measures on both sides.

Empower Industry Against Adversaries

Government does not succeed in AI without industry. Unlike the United States, China and Russia leverage state-owned enterprises to develop their homegrown AI solutions. While Russia is struggling to motivate its industry to meet its AI objectives, China’s integration with its domestic AI industry is considerably more robust and allows it to direct its military industrial base to quickly meet national security objectives. China’s intelligence services, through the extraction of economic secrets, fuel this agility. As a result, the AI industry finds itself on the frontlines of an economic war. They are the first to recognize attackers, they are the first to see new techniques employed to steal their intellectual property, and they are in the best position to respond with speed.

Without a consolidated, rationalized approach to counterintelligence for AI, American industry and innovation will be exploited by any adversary. Given that the AI industry is uniquely at risk, the U.S. government should collaborate with industry and foreign governments to craft a counterintelligence strategy which appropriately recognizes the need to protect AI as a strategic asset. This strategy should be resourced to help the AI industry establish corporate espionage detection, reporting, and abatement capabilities in accordance with recent Securities and Exchange Commission regulations. The volume of attack vectors means the government and industry will, ironically, need to use AI to identify trends and threat actors as well as to surface strategic opportunities to counter the threat in a cross-government, international context. The collected data from these threats needs to be broadly shared in counterintelligence reporting systems within the U.S. government and with allies. Finally, the United States should not be shy about using its offensive cyber authorities, in partnership with industry, to neutralize or defeat adversary attacks on AI strategic assets in compute infrastructure, data, or algorithms.

This approach is both defensive and offensive. It protects the secrets and data sources of the AI industry and equips industry with the means to quickly respond to nefarious foreign activities. Most importantly, it forges a consequential, operational role for the AI industry where it is an equal partner with friendly intelligence services.

Conclusions

To date, the maturing counterintelligence threat to the U.S. AI industry has been ignored. The National Counterintelligence and Security Center last updated its Corporate Counterintelligence Guide in 2013. The document does not include the phrase “artificial intelligence.” The FBI, which has responsibilities for countering economic espionage, has one official interface with industry: the Office of Private Sector. This office has a wide mandate to protect infrastructure, coordinate on national events at risk of terrorist attack, identify any potential crimes against individuals and corporations, and partner on countering economic espionage threats. This office largely filters industry’s access to FBI field offices. It does not have the protection of industry’s AI assets from foreign intelligence in its mission. The Department of Homeland Security has made strides in cybersecurity. However, the approach to countering AI threats to industry is underwhelming. The “Improve Public Trust and Engagement” initiative from its AI strategic plan promises only to “communicate the identification of AI-related risks when practical and considering intelligence collection, law enforcement, and military equities.”

If the United States intends to spend $4 billion or more on AI over the next fiscal year, protecting that investment is vital. While it appears that Russia will not be able to overtake the United States or China in AI, it would be unwise to rely on its incompetence. Russia’s desperate position will only put more pressure on its intelligence services to perform. China has invested and gained too much to abandon its illicit procurement of AI secrets and technology. Both nations are employing AI to undermine the U.S. intelligence apparatus and are building capabilities to undermine long-term American investments in AI. A coordinated AI counterintelligence program using a directive, well-resourced, and cross-governmental body confronts this threat with the candor and vigor it deserves.

Brian Drake is the federal chief technology officer of Accrete.AI Government. He recently departed the Defense Intelligence Agency, where he served as a technology, counterintelligence, and counternarcotics analyst for over 10 years. As the Defense Intelligence Agency’s first director of AI, he managed a portfolio of over $20 million for AI research and development.

Image: IBM Research via Flickr user IBM Research Zurich