
Cogs of War

Biodefense Blind Spot: Why Washington Confuses Pandemics with Bioweapons

Junaid Nabi
February 10, 2026

The next great biological threat may not begin in a wet market, a jungle, or a laboratory accident. It may begin on a laptop with a commercially available AI model.

In October 2025, AI researchers at Microsoft reported that generative AI tools could design dangerous proteins that evade biosecurity controls and, alarmingly, slip past the screening systems used by DNA manufacturers.

In February 2025, researchers at Arc Institute released Evo 2, an AI model trained on 128,000 genomes that can design entirely new organisms. The model achieves 90 percent accuracy in predicting which genetic mutations cause disease. Within weeks of its predecessor’s release in 2024, users had circumvented biosecurity safeguards by adding back viral data that developers had deliberately excluded. The same month, Anthropic reported its Claude Opus 4 model could significantly enhance the ability of novices to plan bioweapon production, triggering the company’s highest security protocols for the first time. OpenAI’s o3 model, released in April 2025, can assist experts in effectively planning operations for reproducing a recognized biological threat.

These are not speculative futures. They are the current capabilities of commercially available systems. Yet American biodefense strategy remains organized around a unified "all hazards" framework, one that treats natural and intentional biological threats as related challenges served by common capabilities. This made strategic sense when pathogen threats, natural or weaponized, followed similar evolutionary constraints. The convergence of AI and synthetic biology breaks this logic.

Adversaries can now computationally design threats that specifically evade surveillance systems optimized for natural pathogens, defeat stockpiled therapeutics, and exploit the dual-use assumptions that made this integrated biodefense efficient. The question is not whether integration was wrong — it was right for its threat environment. The question is whether that environment still exists.

Strategy Without Integration

Strategic coherence requires linking tactical actions to political objectives through an integrated theory of victory. Yet modern biodefense exhibits precisely what strategists warn against: fragmented institutions optimizing for narrow mandates without overarching purpose. The 2022 National Biodefense Strategy illustrates this fragmentation: it lists five goals spanning pandemic preparedness, biological weapons defense, agricultural biosecurity, and laboratory safety, cascading across fifteen federal agencies with overlapping authorities. The Department of Health and Human Services leads pandemic response. The Defense Department handles bioweapon threats. The Agriculture Department protects crops and livestock. No single entity integrates these missions into a coherent strategy or distinguishes between threats requiring fundamentally different responses.

This matters because Michelle Bentley's analysis of the biological weapons "taboo" (the visceral revulsion and normative constraints that have prevented state use of bioweapons despite possession) remains partially valid while becoming outdated in critical ways. The core insight of the analysis holds: bioweapons inspire disgust, carry boomerang risks that deter rational actors, and remain constrained by the Biological Weapons Convention's international norms. Even the Soviet Union's massive Biopreparat program never deployed weapons operationally. More recently, COVID-19 strengthened the taboo by demonstrating disease's devastating societal impact.

Where the framework has become obsolete is in three specific domains transformed by AI-synthetic biology convergence since 2022. First, the knowledge threshold for bioweapon development has collapsed in ways that bypass state-level constraints the taboo regulates. Second, the barrier between computational design and physical synthesis has weakened as automation removes human judgment from the workflow. Third, the assumption that technical difficulty constrains proliferation no longer holds when AI tools embody decades of elite virological expertise accessible to non-state actors who reject taboo norms entirely.

The Capability Shift: What Has Changed

Between December 2024 and May 2025, multiple AI companies that build large language models crossed internal risk thresholds for biological threats. Anthropic's testing found that Claude Opus 4 delivered "significantly greater" performance than Google search or previous models at advising novices on bioweapon production. The company's chief scientist noted: "You could try to synthesize something like COVID or a more dangerous version of the flu — and basically, our modeling suggests that this might be possible." Anthropic deployed its strictest safety protocols, acknowledging it "can't rule out" the risk of "uplifting a novice terrorist, someone like Timothy McVeigh, to be able to make a weapon much more destructive than would otherwise be possible."

In June 2025, OpenAI reported that upcoming models would reach “high” capability levels in biology, with its April 2025 o3 release already helping experts plan biological threats. A December 2024 study by SecureBio, Massachusetts Institute of Technology, and the Center for AI Safety found that o3 outperformed 94 percent of expert virologists on troubleshooting complex lab protocols. These are measured capabilities on concrete technical tasks, not projections.

However, critical limits remain. Current biological design tools cannot yet create pandemic-capable pathogens from scratch. A March 2025 National Academies report concluded that systems “still lack the requisite understanding of how complex biological systems interact” to design self-replicating pathogens de novo. Evo 2’s designed bacterial genome “was missing some critical elements and so would likely not function if synthesized,” according to Arc Institute researchers. The primary bottleneck is insufficient viral training data: Developers excluded eukaryotic viruses as a biosecurity measure, which ironically limits pathogen design capability.

Experts disagree sharply on timelines. Anthropic CEO Dario Amodei predicted in 2023 that AI could offer step-by-step instructions for designing lethal pathogens within two to three years, a timeline the company reiterated in January 2025. An August 2024 study by Johns Hopkins and Stanford researchers argued that the key components needed to develop advanced biological models may already exist or will soon become available, emphasizing the need for immediate governance. The National Academies took a more conservative view in March 2025, suggesting current limitations preclude near-term pandemic pathogen design.

What is indisputable is that synthesis screening systems are failing against current AI capabilities. An October 2025 Science study demonstrated that AI protein design tools can generate variants of known toxins that could bypass commercial screening software. Researchers created 76,000 blueprints for 72 harmful proteins, including ricin and botulinum toxin. Testing revealed that several software systems for detecting toxins, including those used by major nucleic acid suppliers, failed to reliably identify AI-reformulated variants, allowing up to 100 percent of certain protein variants to go undetected. While updated AI detection methods caught 97 percent of these variants, researchers cautioned that sequence-based biosecurity alone will not be sufficient, as future AI-assisted protein generation may produce entirely novel sequences.

This is not a hypothetical risk. It is the measured failure of existing defenses against currently available tools.
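The failure mode described above has a simple structure, which a minimal sketch can illustrate. Everything in this example is invented for illustration: the sequence, the threshold, and both functions are placeholders, not any vendor's actual screening software, and real screeners use far more sophisticated homology search. The structural point stands, though: a check keyed to known sequences verbatim misses a variant that changes a few residues, which is why researchers argue that sequence matching alone will not suffice.

```python
# Illustrative sketch only: why verbatim denylist screening misses
# AI-reformulated variants. The sequence and threshold are invented
# placeholders, not a real toxin or a real screening product.

KNOWN_TOXIN = "MKLVFFAEDVGSNKGAIIGLMVGGVV"  # placeholder sequence

def exact_match_screen(order: str) -> bool:
    """Flag an order only if it contains a listed sequence verbatim."""
    return KNOWN_TOXIN in order

def similarity_screen(order: str, threshold: float = 0.8) -> bool:
    """Flag an order if any window is highly similar to a listed sequence."""
    k = len(KNOWN_TOXIN)
    for i in range(len(order) - k + 1):
        window = order[i : i + k]
        matches = sum(a == b for a, b in zip(window, KNOWN_TOXIN))
        if matches / k >= threshold:
            return True
    return False

# A design tool can propose a variant that (hypothetically) preserves
# function while changing two residues:
variant = KNOWN_TOXIN[:5] + "Q" + KNOWN_TOXIN[6:15] + "R" + KNOWN_TOXIN[16:]

print(exact_match_screen(variant))  # False: the verbatim check misses it
print(similarity_screen(variant))   # True: a similarity check still flags it
```

The updated detection methods mentioned in the Science study operate in this second spirit, scoring similarity and predicted function rather than demanding exact matches; the study's caveat is that sufficiently novel sequences can defeat even that.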

The Institutional Muddle

American biodefense policy systematically conflates two distinct challenges: preparing for naturally occurring pandemics and defending against deliberately engineered biological agents. This confusion appears in budget allocations, organizational mandates, and strategic doctrine.

The conflation starts at the conceptual level. For instance, the 2022 National Biodefense Strategy treats “biological threats — whether naturally occurring, accidental, or deliberate in origin” as a unified category requiring a coordinated response. The framework sets audacious targets, including developing vaccines within 100 days, deploying pathogen-specific tests within 30 days, and repurposing therapeutics within 90 days. These capabilities assume natural pathogen evolution, not adversaries using AI tools to design agents that specifically evade detection systems or defeat therapeutic interventions.

This integration also leads to misallocation. A biodefense model that combines these hazards, while logical for dual-use capabilities, fails against the AI-synthetic biology convergence at specific, measurable points. Take surveillance: the Centers for Disease Control and Prevention's more than $500 million pathogen genomic surveillance program is optimized for detecting natural variants through clinical sampling. It cannot, however, detect AI-designed sequences with no homology to known organisms. Similarly, pandemic-focused investments, such as hospital-based surveillance programs or a nearly $10 billion expansion of the Strategic National Stockpile (stocking predictable broad-spectrum antivirals), crowd out funding for critical biodefense capabilities such as tracking unusual equipment purchases, monitoring synthesis orders, or forensic genomics for attribution. This integrated approach prioritizes epidemiological investigation (contact tracing) but ignores the supply-chain monitoring needed to catch deliberate attacks. This is not mere overlap; it is a systemic gap in which pandemic priorities actively undermine necessary biodefense investment.

Consider three critical distinctions the current framework obscures:

Detection Architecture

Natural pandemics typically emerge from zoonotic spillover at predictable ecological interfaces. The Centers for Disease Control and Prevention’s genomic surveillance network monitors clinical samples for variants of known pathogen families following natural evolutionary patterns. This system cannot detect AI-designed synthetic pathogens with genetic sequences bearing no homology to known organisms, specifically engineered to evade genomic surveillance. Detection would require fundamentally different sensor networks monitoring DNA synthesis orders, laboratory equipment sales, and research publication patterns — capabilities that fall outside the Centers for Disease Control and Prevention’s pandemic surveillance mandate and receive virtually no federal investment.

Medical Countermeasures

Pandemic response emphasizes rapid vaccine platforms, therapeutic manufacturing scale-up, and broad-spectrum antiviral stockpiles. Operation Warp Speed invested $18 billion to expedite COVID-19 vaccine production, while the Strategic National Stockpile holds medical equipment and pharmaceuticals for outbreak response. This strategy assumes that adversaries have not tailored viral proteins to evade existing vaccines or engineered resistance to therapeutics. The 2023 Department of Health and Human Services Screening Framework recognizes that advancements in biotechnology could enable the creation of novel pathogenic proteins but offers no specific measures to address this threat, instead encouraging synthesis providers to establish their own best practices.

Attribution and Deterrence

Natural pandemics require no attribution. Response focuses on public health measures, not adversary intentions. Deliberately released biological weapons demand attribution to enable proportional response and deterrence. Yet, the 2022 National Biodefense Strategy contains no discussion of attribution capabilities, despite acknowledging state bioweapons programs in Russia, North Korea, China, and Iran.

This institutional confusion produces three concrete policy failures:

Wrong Investments

Fiscal year 2024 allocated approximately $20 billion for pandemic preparedness across Health and Human Services, the Centers for Disease Control and Prevention, and the National Institutes of Health, focused on vaccine platforms, clinical trial infrastructure, and disease surveillance. By contrast, the National Institute of Standards and Technology’s Center for AI Standards and Innovation, which is the primary federal entity evaluating frontier AI models for biological risks, faces a proposed 35 percent budget cut in fiscal year 2026. The Centers for Disease Control and Prevention’s Public Health Emergency Preparedness received $735 million, less than 3 percent of pandemic preparedness spending. This allocation makes sense if biological threats are primarily natural pandemics. It makes no sense if the real danger lies in adversaries using AI tools to engineer pathogens that pandemic response systems cannot handle.

Missed Opportunities

No federal program systematically evaluates biological design tools for dual-use risks before public release. The Trump administration's July 2025 AI Action Plan directs the National Institute of Standards and Technology to evaluate frontier AI models for emerging high-impact biological capabilities (also called Biological Development Technologies) but provides no funding, no testing protocols, and no enforcement mechanism. Arc Institute released Evo 2 as open source without any requirement for pre-deployment government review, despite the model's ability to design novel genomes. Google DeepMind similarly released AlphaFold 3 with voluntary biosecurity consultations but no regulatory oversight. The default is unregulated proliferation of increasingly powerful biodesign tools, while pandemic-focused agencies lack the mandate or expertise to intervene.

Dangerous Blind Spots

DNA synthesis screening — the most important choke point for preventing bioweapon production — remains voluntary, fragmented, and inadequate. The 2023 Department of Health and Human Services Screening Framework encourages synthesis providers to screen orders against the 63-agent select agents list, but imposes no legal obligation. Only recipients of federal research funding must use screened providers, leaving commercial customers free to shop internationally. No mechanism exists to share data on suspicious orders across companies. The International Gene Synthesis Consortium, representing 80 percent of global capacity, operates through voluntary information sharing with no verification or enforcement. This is not a system designed to stop adversaries from acquiring AI-designed bioweapons. It is a patchwork inherited from pandemic biosafety protocols, inadequate by design for the emerging threat environment.

What Must Change

Addressing AI-synthetic biology convergence while distinguishing it from pandemic preparedness requires institutional reforms, not incremental budget adjustments.

Four policy actions should begin immediately:

First, consolidate fragmented biodefense by reforming existing entities. Although the Department of Homeland Security already operates the National Biosurveillance Integration Center and Countering Weapons of Mass Destruction Office, both lack authority and capacity for AI-synthetic biology threats. The National Biosurveillance Integration Center's statute (6 USC §195b) focuses on early warning for "biological events of national concern" with no authority for pre-deployment technology assessment. The Countering Weapons of Mass Destruction Office emphasizes operational response, not computational genomics expertise. Given the current administration's focus on reform, the AI-biosecurity mission can be split between two existing entities with complementary strengths: authorize the Defense Advanced Research Projects Agency to lead pre-deployment evaluation of biological design tools above computational thresholds, leveraging its existing AI expertise, and give the Countering Weapons of Mass Destruction Office authority to enforce mandatory synthesis screening, building on its operational mission. Fund both through consolidation: the Government Accountability Office repeatedly identifies biodefense as fragmented across fifteen agencies. This is national security consolidation during an efficiency-focused administration, not regulatory expansion.

Second, the president should issue an executive order establishing an AI-enabled DNA synthesis screening infrastructure that incentivizes industry adoption. The order should direct the National Institute of Standards and Technology to develop a federal screening application programming interface (a software platform) that synthesis providers query for real-time risk assessments using AI-powered functional detection, not just sequence matching. There are powerful adoption incentives for this policy: federal procurement requirements for certified providers, liability protection from biosecurity lawsuits, expedited FDA review, and public certification. Providers declining participation must disclose this to customers and report high-risk orders to the Department of Homeland Security, creating market pressure. This leverages existing executive authorities — procurement rules, liability frameworks, and regulatory review — to achieve industry-wide adoption within 12 to 18 months. This is industrial policy building security infrastructure that enables AI-driven biological innovation while demonstrating responsible leadership — a competitive advantage over China’s less regulated approach.
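To make the proposed workflow concrete, the query-and-respond loop might look something like the sketch below. Every specific here is an assumption invented for illustration: the endpoint URL, the payload fields, and the response schema do not correspond to any existing federal service, since no such application programming interface exists today.

```python
# Hypothetical sketch of the proposed federal screening workflow.
# The endpoint, payload fields, and response schema are invented for
# illustration; no such federal API exists today.
import json
import urllib.request

SCREENING_API = "https://screening.example.gov/v1/assess"  # placeholder URL

def assess_order(order_id: str, sequence: str, customer_id: str) -> dict:
    """Submit one synthesis order to the (hypothetical) federal risk API."""
    payload = json.dumps({
        "order_id": order_id,
        "sequence": sequence,        # nucleic acid sequence to screen
        "customer_id": customer_id,  # enables cross-provider pattern analysis
    }).encode()
    req = urllib.request.Request(
        SCREENING_API,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # e.g. {"risk": "high"} or {"risk": "low"}

def decide_action(assessment: dict) -> str:
    """Map the API's risk assessment to the provider's obligation."""
    if assessment.get("risk") == "high":
        # Under the proposal, certified providers would report high-risk
        # orders to the Department of Homeland Security, not fulfill them.
        return "report_to_dhs"
    return "fulfill"
```

Separating the network query from the fulfillment decision mirrors the policy design: the government supplies the AI-powered risk assessment, while the provider retains the order decision and the attendant reporting obligation.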

Third, require pre-deployment security evaluation of biological design tools above specified computational thresholds. Any AI model trained on biological data at a computational scale exceeding that of systems like Evo 2 and AlphaFold 3 should undergo evaluation by the Center for AI Standards and Innovation before public release. Evaluation would assess: (1) the capability to design novel pathogens, (2) ease of circumventing built-in safeguards, and (3) the potential for misuse by actors with limited technical expertise. The evaluation would not block release, but require developers to implement specified safeguards (training data exclusions, query filtering, usage logging) and report attempted misuse. This protocol would mirror the E.U. AI Act's approach to high-risk systems, adapted for biological threats. Academic researchers releasing nonprofit models could receive federally funded evaluation support to avoid penalizing open science.

Fourth, fund the National Institute of Standards and Technology and the Center for AI Standards and Innovation at levels proportionate to the threat scale. Proposed budget cuts to the National Institute of Standards and Technology contradict the administration's biosecurity priorities articulated in the July 2025 AI Action Plan. Congress should heed recommendations from experts such as the Johns Hopkins Center for Health Security, which advocates for at least $8 million to $10 million in annual funding for the National Institute of Standards and Technology's AI safety branches (such as the Center for AI Standards and Innovation). This funding is vital for AI-biosecurity initiatives such as developing standardized benchmarks, conducting red-teaming exercises on bioweapon pathways, and enhancing international coordination through the AI Safety Institute network. These tasks should not be left to AI companies' internal safety teams, which are under commercial pressure to release products quickly.

The Window Is Closing

The convergence of AI and synthetic biology presents an immediate threat, with commercial AI systems capable of expert-level bioweapon development and evading DNA screening. However, American biodefense strategy conflates this deliberate biological threat with natural pandemic preparedness, creating dangerous blind spots and prioritizing the wrong investments.

The proposed reforms face political obstacles in a deregulation-focused administration, but they consolidate fragmented efforts rather than expand government. Fifteen agencies with overlapping biodefense authorities create redundancy, not strength. The voluntary synthesis screening framework aligns with deregulatory philosophy but demonstrably fails — the October 2025 Science study proves that systems miss AI-generated threats. Mandatory screening applies narrowly to synthesis providers and frontier AI developers, not biotech research broadly. This transcends regulatory philosophy: whether voluntary or mandatory matters less than whether it works. Even deregulation-focused administrations fund defense priorities when adversaries threaten national security. AI-augmented bioweapons demand that recognition. The time to act is not after the next penetration test exposes our vulnerabilities. The test is happening now, and the current defenses are failing.

 

Junaid Nabi, M.D., MPH, is a physician-scientist advancing digital health strategy, healthcare reform, and national security. His latest research exposed critical cybersecurity vulnerabilities in AI-enabled remote monitoring systems. He serves as a senior fellow at the Aspen Institute and a Millennium fellow at the Atlantic Council. Connect with him on X: JunaidNabiMD.

**Please note, as a matter of house style, War on the Rocks will not use a different name for the U.S. Department of Defense until and unless the name is changed by statute by the U.S. Congress.

Image: Midjourney
