Cogs of War

Stove-Piped Systems Will Strangle Advanced AI in the Cradle

Todd Krokowski
November 20, 2025

Imagine deploying groundbreaking AI technologies — like machine learning and large language models — within defense organizations, only to confine them in isolated, stove-piped systems with minimal interoperability. These closed systems, often reliant on proprietary data and shrouded in opaque functionality, limit the true potential of AI. While these applications can tackle individual tasks effectively, their inability to seamlessly interact and share data prevents us from harnessing their collective power to address more complex, higher-order challenges. Without a shift towards open standards and protocols, the full promise of AI for the U.S. military will remain untapped, leaving us vulnerable and unprepared for the future.

An AI agent is a software entity that performs tasks autonomously, using artificial intelligence to make decisions, learn from experiences, and interact with its environment. These agents can range from simple bots to exquisite systems capable of complex problem-solving. However, to realize their full potential, AI agents must be able to communicate and collaborate with one another. Open standards and protocols are crucial for this interoperability, enabling different AI agents to share data, coordinate actions, and collectively solve problems that are beyond the capacity of any single agent. Moreover, open standards provide transparency for these complex systems, much like the transparency needed in an AI algorithm to build trust. Without these shared frameworks, both the effectiveness and trustworthiness of AI in delivering comprehensive solutions remain severely limited.

A look at how AI agents can help defeat drone swarms provides an example.

Say that in the near future, war has suddenly broken out between the United States and China. At a military installation on the first island chain, U.S. forces are preparing to launch precision fires against enemy air, sea, and land-based targets.

At the same time, however, swarms of the adversary’s aerial and seagoing drones — many fully under control of a merciless AI — are racing toward the base from multiple directions. The security personnel defending against the drone swarms are equipped with the Tactical Assault Kit application on their mobile devices, giving them a shared common operating picture and enabling them to determine what is going on around them quickly and efficiently. This is an issue near and dear to me as one of the original developers of this kit for the military and a leader at Booz Allen developing its next-generation capabilities. The Tactical Assault Kit allows non-technical users to not only possess enhanced situational awareness but also to perform command and control of connected systems.

However, with so many AI-controlled enemy drones racing in, the U.S. forces on the island will require their own AI to provide the speed needed to fight fire with fire. And AI agents can be particularly valuable here.

AI agents don’t just analyze information; they have the ability to work toward commanders’ goals, such as defeating a drone swarm. And in doing so, they can direct, or “orchestrate,” other AI. For example, they might string together machine learning models to look for patterns in sensor data that would suggest a drone swarm attack is imminent, even when the drones are still far out to sea. They might use large language models to predict the swarms’ armaments and tactics, including how they will likely try to evade the base’s drone defenses.
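To make the orchestration idea concrete, here is a minimal, purely illustrative sketch. All of the class names, model behaviors, and thresholds below are invented placeholders, not real defense systems: a simple agent chains a stand-in pattern detector and a stand-in tactics predictor toward a single goal, the way the paragraph above describes an agent stringing together machine learning models and language models.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of an agent orchestrating other AI models.
# All names, logic, and thresholds are invented for this sketch.

@dataclass
class SensorPatternModel:
    """Stand-in for a machine learning model watching sensor feeds."""
    def detect(self, readings: list[float]) -> bool:
        # Toy heuristic: several high readings suggest an inbound swarm.
        return sum(r > 0.8 for r in readings) >= 3

@dataclass
class TacticsPredictor:
    """Stand-in for a language model predicting swarm behavior."""
    def predict(self, swarm_detected: bool) -> str:
        return "low-altitude saturation attack" if swarm_detected else "no threat"

@dataclass
class DefenseAgent:
    """Agent pursuing a commander's goal by chaining the models above."""
    detector: SensorPatternModel = field(default_factory=SensorPatternModel)
    predictor: TacticsPredictor = field(default_factory=TacticsPredictor)

    def assess(self, readings: list[float]) -> dict:
        detected = self.detector.detect(readings)
        return {
            "swarm_detected": detected,
            "predicted_tactic": self.predictor.predict(detected),
        }

agent = DefenseAgent()
report = agent.assess([0.9, 0.95, 0.2, 0.88])
```

The point of the sketch is the shape of the pipeline, not the toy logic: the agent, not a human operator, decides when one model's output becomes another model's input.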

The AI agents — laser-focused on defeating the drone swarms — would bring all this information together on base defenders’ Tactical Assault Kit devices. The AI agents could present a common operating picture and recommend courses of action. Working with other Tactical Assault Kit users, they could help coordinate joint fires, including by prioritizing targets and identifying who is in the best position to shoot them, and with what weapons. This type of defense is highly kinetic and requires a custom, layered approach, built with a system-of-systems mentality, leveraging diverse solutions from various vendors to effectively mitigate incoming threats.

However, the AI agents can’t leverage a bespoke layered defense if they can’t talk to the other AI.

In a closed, proprietary system, a group of machine learning models might be able to identify patterns in sensor data to identify a drone swarm. But they may not be able to share that with other AI for a coordinated defense — it would be up to operators to manually connect that information with the outputs of other AI and present options to commanders. In a closed system, a large language model might be able to predict swarm tactics. But again, operators would likely have to figure out, on their own, how that information could relate to other AI insights.

But they may not have the time.

A drone swarm attack by a peer adversary would likely be guided by sophisticated AI agents, possibly autonomously. Base personnel would need their own AI agents for a rapid, coordinated defense. And to do that, they would need full interoperability between the base’s systems.

Defense organizations can achieve this AI interoperability by connecting their current systems to open rather than closed platforms. This will allow a free flow of information between systems, making it possible for an organization’s AI to work as a whole — including through AI agents — toward commanders’ mission goals.
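One way to picture what an open platform buys you is a shared message format. The sketch below is a hypothetical example, not any real standard: a producer agent publishes a finding as plain JSON against an agreed schema, and any consumer agent that knows the schema can validate and use it, with no proprietary coupling between the two.

```python
import json

# Hypothetical sketch of interoperability through an open message format.
# The field names and schema are invented for illustration.

REQUIRED_FIELDS = {"source", "event_type", "confidence"}

def publish_event(source: str, event_type: str, confidence: float) -> str:
    """Producer agent serializes its finding to the shared, open schema."""
    return json.dumps({
        "source": source,
        "event_type": event_type,
        "confidence": confidence,
    })

def consume_event(message: str) -> dict:
    """Any consumer agent can parse and validate the same schema."""
    event = json.loads(message)
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        raise ValueError(f"schema violation, missing fields: {missing}")
    return event

msg = publish_event("sensor-fusion-agent", "swarm_detected", 0.92)
event = consume_event(msg)
```

In a closed system, the producer's output format is private, so the connecting step falls to a human operator; with an open schema, that handoff is automatic.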

With open platforms and standards, organizations can seamlessly plug in and connect new technologies to stay ahead of sophisticated adversaries. And they can bring emerging forms of AI to systems such as Tactical Assault Kit to help win the fight.

Once defense organizations have achieved AI interoperability, they will be able to use advanced AI in new ways they might not have envisioned. For example, AI agents can turbo-charge capabilities such as persistent targeting, real-time combat adaptation, unmanned joint fires, emissions control, training, and readiness.

Advanced AI for the warfighter is the future — and the present. Closed, stove-piped systems might have been sufficient in the past. That is no longer the case.

 

Todd Krokowski is a distinguished technology executive with nearly two decades of experience leading innovation at the intersection of national security, emerging technology, and commercial product strategy, and one of the key creators of the Tactical Assault Kit. As a director at Booz Allen Hamilton in its Defense Technology Group, he guides the development of next-generation software and hardware solutions for the U.S. military.

Image: Midjourney
