Cogs of War

How Washington Is Losing the AI Race No One Is Tracking
Javaid Iqbal Sofi
November 26, 2025

Spain — a NATO ally and founding member of the European Union — chose a Chinese company to manage its law enforcement wiretap systems.

The €12.3 million contract awarded to Huawei allows the Chinese firm to process and store legally authorized surveillance data used by Spanish police and intelligence services, despite E.U. warnings issued as recently as September 2025.

The reason wasn’t technology. U.S. firms offer systems that are technically superior. The problem was regulatory alignment. Huawei designed its bid around E.U. compliance requirements while American vendors struggled to produce the documentation European procurement offices demanded.

This is not an isolated case. A Pew Research Center survey from spring 2025 reveals the underlying credibility gap: a median of 53 percent of adults across 25 countries trust the European Union to regulate AI, while only 37 percent trust the United States. This trust deficit is a measurable homeland security vulnerability, creating a “documentation gap” that is sabotaging U.S. exports and allied interoperability.

The U.S. government is rightly focused on accelerating domestic innovation. The White House’s July 2025 AI Action Plan and its accompanying Executive Order on American AI exports are designed to remove regulatory barriers and “turbocharge” innovation. This strategy, however, ignores a critical reality: America’s allies are not buying abstract innovation; they are buying legally compliant systems.

The Framework Mismatch

The U.S. domestic approach, built on the excellent but voluntary National Institute of Standards and Technology AI Risk Management Framework 1.0, is colliding with a mandatory, top-down model.

The European Union’s AI Act, now law, creates binding legal prerequisites for market access. The Act requires detailed technical documentation for systems the European Union deems “high-risk,” a category that covers most border control, law enforcement, and critical infrastructure applications.

While the U.S. promotes deregulation, China is exploiting the resulting compliance vacuum.

Beijing’s Global AI Governance Action Plan, also released in July, explicitly positions regulatory compliance as a core export capability. This is not theoretical. In Serbia, Huawei’s “Safe City” surveillance project is actively expanding as of October 2025, with leaked documents revealing orders for equipment supporting 3,500 additional cameras.

The system includes AI-powered facial recognition — a technology where U.S. firms hold technical advantages but cannot compete on the regulatory terms European procurement offices now require.

As Morgan Plummer recently argued, early AI policy decisions tend to harden quickly — what begins as documentation requirements today becomes strategic lock-in tomorrow.

Washington has proven it can move with urgency on China-focused technology restrictions. Treasury’s Outbound Investment Security Program went from executive order in August 2023 to full implementation in January 2025. The Export Control Reform Act gives Commerce sweeping authority to regulate emerging technologies, including AI systems, with documentation and transparency requirements — authority it exercised through January 2025 rules restricting advanced AI chip exports globally.

Congress, too, has acted. The CHIPS and Science Act, enacted in 2022, directs the National Institute of Standards and Technology to develop technical standards and guidelines for trustworthy AI systems. The FY 2025 National Defense Authorization Act, passed in December 2024, establishes a working group under Section 1087 to develop and coordinate AI initiatives among U.S. allies and partners, including identifying solutions to advance interoperability of AI systems and allied standards.

Bipartisan Senate and House AI working groups have called for advancing U.S. competitiveness in AI standards. The proposed Global Technology Leadership Act would establish an Office of Global Competition Analysis to track U.S. AI competitiveness, though versions have stalled in committee since June 2023. The proposed Outbound Investment Transparency Act would further tighten oversight of technology flows to strategic competitors.

The U.S.-E.U. Trade and Technology Council has produced coordination mechanisms. Working Group 1 on technology standards and Working Group 5 on data governance have developed a Joint Roadmap on Trustworthy AI that explicitly addresses conformity assessment, measurement tools, and risk management frameworks.

Yet these efforts share a critical blind spot. They restrict U.S. firms, fund domestic research, and coordinate with allies — but they don’t convert innovation into the documentation European procurement offices require. The National Defense Authorization Act provisions cover military systems; Spain’s Interior Ministry contracted with Huawei using European Union civilian standards. The Trade and Technology Council roadmaps exist; U.S. vendors still lack practical certification pathways. Commerce has authority under the Export Control Reform Act to require AI documentation; it has used this for export restrictions, not allied procurement alignment.

The Homeland Security Stakes

This vulnerability creates a direct risk to U.S. homeland security. The Department of Homeland Security reported 158 AI use cases in its 2024 inventory, a 136 percent increase from the previous year, and the agency is moving rapidly to deploy more systems that will need to integrate with allied counterparts.

The Department of Homeland Security already possesses relevant authorities. The department issues Binding Operational Directives under the Federal Information Security Modernization Act that compel federal agencies to adopt specific cybersecurity and technology practices. It operates under presidential directives requiring coordination with international partners to strengthen critical infrastructure security and resilience. These authorities haven’t been deployed to address allied AI procurement documentation gaps.

When the Department of Homeland Security operates AI systems that coordinate with European border agencies, those systems must satisfy E.U. legal obligations. This is the blind spot: if an American AI-powered cargo scanner is algorithmically superior but lacks the required legal paperwork, an allied port authority, such as Spain’s, cannot legally integrate it.

Strategic Lock-In and Alliance Costs

The common argument in Washington is that superior U.S. innovation will eventually win. This ignores a hard procurement reality: You cannot win a contract you are not legally allowed to compete for. By the time a U.S. firm retrofits compliance, an allied nation may have already started a 15-to-20-year procurement cycle with a compliant competitor.

This challenge extends beyond procurement. As Andrew Hill and Dustin Blair recently explored, military leaders must learn to trust AI systems whose reasoning they cannot fully understand. The regulatory credibility gap compounds this challenge: allies may trust opaque systems from vendors demonstrating comprehensive governance frameworks more readily than superior algorithms from sources lacking regulatory credibility.

Allied procurement teams now evaluate regulatory competence as a core criterion. As NATO advances its AI strategy and capabilities, vendors lacking conformity documentation face mounting integration barriers. This is how the trust gap becomes a strategic crisis: superior American technology gets locked out, not by adversaries, but by the administrative hurdles of our allies.

Implementation: From Policy to Procurement

Washington doesn’t need to reinvent its approach: It needs to translate its domestic policy into a coherent export strategy. Three concrete actions can address this gap.

First, the under secretary of defense for acquisition and sustainment should issue a Class Deviation (a mechanism that allows defense agencies to bypass normal procurement rules during urgent policy changes) requiring AI-enabled systems intended for allied integration to include Regulatory Interoperability Plans among their initial evaluation criteria. This would signal to the defense industrial base that regulatory interoperability is a core requirement while formal rulemaking proceeds.

Second, the secretary of homeland security should establish a Department of Homeland Security-European AI Office working group by the second quarter of 2026. Staffed with compliance experts from U.S. Customs and Border Protection and the European Commission’s European AI Office, this group would publish joint templates for high-risk AI technical documentation and pilot mutual-recognition pathways for shared security systems.

Third, the Commerce Department should launch an AI regulatory passport program by early 2026, providing standardized certification that American AI systems meet both domestic standards from the National Institute of Standards and Technology and allied regulatory requirements. The department already operates the American AI Exports Program, launched in October 2025, which provides the infrastructure for such a certification pathway.

Allied procurement decisions are not waiting. Competitors are systematically investing in compliance as a capability. Washington can decide whether it will treat administrative documentation as the strategic capability it has become.

If not, America’s superior algorithms will remain on the sidelines, excluded from the allied systems they were designed to support.

 

Javaid Iqbal Sofi is an AI governance researcher and policy consultant who has advised international organizations on regulatory frameworks. He can be reached at Javaidiq@vt.edu.

Image: Midjourney
