The race to develop artificial general intelligence is accelerating, but America’s approach to securing it remains dangerously inadequate. While Washington celebrates its new “AI Action Plan,” which champions a light-touch regulatory model to foster innovation, Chinese intelligence services are very likely targeting American AI labs with sophisticated espionage operations. This official embrace of minimal oversight ignores a sobering reality: The country’s most advanced AI research — the very technology that will define the next century of global power — remains critically vulnerable to theft and sabotage.
The arithmetic is stark: It takes years to build secure data centers, establish protected supply chains, and implement the kind of military-grade safeguards that could withstand determined nation-state actors. Meanwhile, leading AI models are advancing toward human-level capabilities on timelines measured in months, not years. Every day we delay implementing serious security measures is another day that critical AI research remains vulnerable to theft, sabotage, or worse.
Advocates of a light-touch approach — labs and their lobbyists, innovation-focused policymakers, and some conservatives — argue that firms already have strong incentives to act responsibly. There’s no need for onerous regulations, the thinking goes, when companies already want to avoid Chinese domination and technical catastrophes (e.g., misaligned superintelligence). Pair that with export controls to slow China’s progress, and you have the makings of a winning formula: Let the United States innovate faster and keep the most powerful systems out of Beijing’s hands.
Anthropic CEO Dario Amodei recently proposed a compromise: mandatory transparency requirements that would force labs to disclose safety evaluations and mitigation plans, while preserving their freedom to innovate. A federal standard would be an easy fix, he says, because it would “codify what many major developers are already doing.” It’s a reasonable middle ground for a politically constrained moment. But transparency alone cannot solve the fundamental problem that AI labs developing potentially superintelligent systems are still operating like commercial tech companies when they should be treated like strategic national assets. In this race, half-measures are a formula for strategic failure.
Amodei is right to call for disclosure and oversight, but his proposal rests on a mistaken assumption: that transparency alone can manage threats from AI systems that may one day exceed human intelligence. As someone who has written extensively about the harms of excessive government secrecy, I value transparency deeply. But I also understand its limits as a solution to the urgent national security risks we face. Those include the control-and-alignment problems and malicious-actor threats Amodei presents as examples of AI dangers. At least as important is the very real possibility that China will develop advanced artificial general intelligence before the United States, which would let Beijing achieve military and economic superiority and a strategic monopoly on global power. Beijing might get there largely on its own, or it might do so through espionage, sabotage, or both. We have extensive evidence of persistent Chinese efforts to steal intellectual property, including from leading tech firms building frontier AI models.
As things stand now, the leading AI labs “are the security equivalent of Swiss cheese.” Gladstone AI’s April 2025 report, written with extraordinary inside access likely due to its relationship with the federal government, documents significant vulnerabilities at every level of model development:

- Attacks that could paralyze data centers for less than $20,000.
- Chinese-made parts that provide back-door access and sabotage opportunities, with no alternative options because of China’s dominance of the hardware supply chain.
- Chinese human and signals intelligence capabilities that probably already provide access to critically important intellectual property, including model weights and architectures.
One example from the report describes “an attack that allows hackers to reconstruct the architecture of a small AI model using nothing but the power consumption profile of the hardware that runs it.” Much stronger “information extraction attacks” using “electromagnetic, sound, or vibrational signals” are also available. Beyond the strategic nightmare of China achieving an advanced artificial general intelligence monopoly, the consequences of such a security breach could be immediate and catastrophic for Americans, enabling attacks on everything from financial markets to critical infrastructure. While China remains America’s primary competitor in AI and other domains, Russia’s highly capable intelligence services could also steal secrets and wreak havoc to get ahead.
As Amodei has publicly stated, the threat of Chinese industrial espionage is a primary concern for leading AI labs. This is not a distant threat — it is an active siege. For years, the FBI has been sounding the alarm, with former Director Chris Wray warning that China’s campaign of theft is “more brazen, more damaging than ever before,” forcing the bureau to open a new China-related counterintelligence investigation “every 12 hours.” While federal authorities have achieved notable successes — such as the recent indictment of a Chinese national for an alleged plot to steal proprietary AI technology from Google — these actions are fundamentally reactive. They reveal a strategy of catching spies after they’ve already penetrated the gates, which is inadequate when the goal ought to be to prevent the theft of nation-defining technology in the first place.
Amodei’s proposal might be politically viable in the short term. With regulation-wary Republicans in control of the White House and Congress, the notion of limited transparency requirements comes across as a reasonable compromise, “the best way to balance the considerations in play.” But transparency can’t protect AI labs from Chinese espionage and sabotage. Labs working toward advanced artificial general intelligence are not just commercial entities, like pharmaceutical firms, where disclosure and product safety are the primary regulatory goals. They are more like private nuclear facilities or bioweapons labs: sites of strategic national importance. Disclosure standards and post-hoc oversight are nowhere near enough. The problem isn’t just that AI labs are insecure. It’s that they are treated as commercial ventures when they are already operating as strategic sites targeted by rival intelligence services. A light-touch approach, with or without transparency requirements, is fundamentally misaligned with the scale of the national security risk. Asking commercial companies to defend themselves against a determined, state-level adversary is a recipe for failure.
While a full “Manhattan Project-like program” may not be necessary, the current approach is untenable. What we need now is a tiered risk governance framework that distinguishes between levels of danger and scales regulatory demands accordingly. Low-risk models would remain unregulated, with minimal required public disclosure, perhaps enough to allow civil society monitoring. Intermediate-risk models would operate under a regime of mandatory transparency, safety evaluations, and state-enforced secrecy for particularly sensitive assets (e.g., model weights, novel algorithms and architectures). High-risk models would require something closer to military-grade governance: not only technical safeguards like secure, government-audited data centers and a new classification system that treats models and the methods used to build them as state secrets, but also rigorous personnel security protocols. Personnel would need not just federal vetting and clearance, but also continuous security training, participation in insider threat awareness programs, and cultivation of a security-first culture.

Thresholds between tiers would be based on factors such as autonomous decision-making, strategic planning capabilities, goal preservation under adversarial conditions, and dual-use potential. To draw these distinctions, the White House should convene a task force composed of lab executives, independent computer scientists, and national security professionals from the intelligence community, the Department of Defense, the Department of Energy, and the Cybersecurity and Infrastructure Security Agency.
Moving from light-touch to tiered risk governance will face political resistance, especially in the current environment. However, the national security framing may attract enough support from defense-minded lawmakers to make progress possible, particularly since Congress is already grappling with the scale of this threat in hearings on China’s systematic theft of U.S. technology, including advanced AI. Crucially, this approach would reframe security spending not as a regulatory burden, but as a strategic co-investment by the government, strengthening the leading AI labs financially through federal partnerships while leaving the vast majority of AI development unregulated.
These are not radical proposals. We already treat nuclear facilities and cyberweapons with this level of precaution. The strategic stakes of advanced AI are no less serious, and the time to act is now. The proposed bipartisan Advanced AI Security Readiness Act is a critical first step. While the bill rightly tasks the NSA’s AI Security Center with designing an “AI Security Playbook to address vulnerabilities, threat detection, cyber and physical security strategies, and contingency plans for highly sensitive AI systems,” its success will depend on cooperation with the FBI’s Counterintelligence Division, which is responsible for stopping spies targeting labs on U.S. soil. Passing this bill would be a down payment on the robust security framework America needs.
Jason Ross Arnold is professor and chair of political science at Virginia Commonwealth University, with an affiliated appointment in the Computer Science Department. He is the author of Secrecy in the Sunshine Era: The Promise and Failures of U.S. Open Government Laws (2014), Whistleblowers, Leakers, and Their Networks, from Snowden to Samizdat (2019), and Uncertain Threats: The FBI, the New Left, and Cold War Intelligence (forthcoming, 2025).
Image: Midjourney