
Chasing True AI Autonomy: From Legacy Mindsets to Battlefield Dominance

Vitaliy Goncharuk
December 15, 2025

Western militaries are still arguing over what “autonomy” means while Russia and China are already building machines that don’t need GPS, data links, or even instructions.

And unless the United States rewrites its understanding of autonomy, it will keep fielding systems that look modern on paper but collapse the moment the battlefield cuts the cord. The United States and its allies should abandon legacy concepts of “autonomy” and rapidly transition toward true AI autonomy — systems capable of independently perceiving, deciding, and acting in contested environments where GPS, external data, and human supervision cannot be relied on. Without this shift, Western militaries risk falling behind Russia and China, which are already fielding increasingly autonomous systems geared for electronic-warfare-intensive environments.

True AI autonomy needs to be defined and distinguished from today’s often-misleading marketing language that conflates remote control with intelligent independence. Recent developments in the Russo-Ukrainian War, namely electronic warfare, manpower shortages, and the growing role of robotics in logistics and casualty evacuation, show why genuine autonomy is now strategically indispensable. Yet ecosystem barriers prevent rapid adoption: data scarcity, insufficient testing infrastructure, limited open-source collaboration, regulatory obstacles, and brittle system architectures. There is an urgent need for architectural principles and long-term technology roadmaps to build systems that can evolve on 18–24-month cycles rather than decades.

U.S. and allied defense leaders should take certain steps toward achieving true AI autonomy, including creating shared datasets and national test ranges, demanding modular upgradeable architectures, opening pathways for civilian innovators, and establishing clear procurement signals that reward genuine autonomy rather than incremental pseudo-autonomous features.


What Is True AI Autonomy, and What Is Not?

The defense technology arena today is awash in terminology — “unmanned systems,” “autonomous systems,” “robotics,” “AI-enabled,” and so on — which often muddles understanding.

Decision-makers can easily get the wrong impression. For instance, an “unmanned” aerial system might conjure images of a drone flying itself like the Terminator’s hunter-killer — when in reality it’s nothing more than a remotely piloted aircraft with a human crew guiding it through every step of the mission.

Let’s cut through the jargon. I will use a strict, no-compromise definition of true AI autonomy and avoid euphemisms like “unmanned” that mask the actual level of human control.

As defined here, true AI autonomy is the set of technologies inside a missile, drone, or robot (sensors and AI algorithms on board) that allows the system to carry out an entire mission on its own, without human instructions or reliance on external data sources, under the specific conditions where doing so gives a significant advantage over the enemy. In other words, an autonomous weapon or vehicle can perceive, decide, and act by itself, in real time, to accomplish a task that would normally require human guidance, GPS, or remote sensors.

A truly autonomous mission could be lethal (e.g., destroying a target), informational (e.g., conducting reconnaissance), or logistical (e.g., delivering supplies). In the next few years, humans will likely still define the mission goals for these systems. For example, a commander tasks a drone to scout a certain area or a loitering munition to hunt for air defense radars in a defined zone.

But within five years, we should aspire to systems that can determine their own missions based on an understanding of the battlefield context. That might mean an unmanned submersible recognizing an enemy ship is nearby and choosing to shadow it, or a surveillance drone noticing an opportunity to jam the enemy’s communications and seizing the chance automatically.

Crucially, true autonomy cannot depend on external connectivity. If your “AI” drone needs a remote operator tele-operating it, or it requires a constant feed from GPS satellites or a distant radar to find targets, then it is not truly autonomous. It’s just a remote-controlled system with some fancy features.

By this strict definition, a lot of things marketed as “autonomous” today simply don’t qualify. And that is exactly the point: the defense community should stop using such terminology loosely.

Why AI Autonomy Is Crucial for Defense

Skeptics argue that AI-autonomous military systems won’t be effective in certain scenarios. For example, critics claim an AI-guided drone won’t work over the Pacific due to a lack of terrain features for navigation, or that autonomy fails in difficult conditions like twilight, fog, or heavy rain.

On the surface, a potential Indo-Pacific conflict — a vast maritime theater — might seem to favor traditional crewed platforms and long-range missiles over autonomous drones or robots. However, such arguments overlook several crucial factors: electronic warfare and GPS jamming threats, a shortage of human operators, and the role of autonomy in logistics and casualty evacuation.

Electronic Warfare and GPS Jamming Threats

Modern electronic warfare has exposed the vulnerability of weapons that rely on external signals like GPS. In Ukraine, Russian forces have used extensive jamming to disable or spoof GPS guidance, drastically reducing the effectiveness of Western precision munitions. This threat is not confined to Ukraine’s frontlines — it is proliferating worldwide. Portable jamming devices are becoming accessible to non-state actors, hobbyists, and terrorists. For instance, off-the-shelf or homemade gadgets that disrupt GPS signals can be acquired for a few dozen dollars. It is not hard to imagine a well-funded group using jammers to disable police bomb-disposal robots or crash surveillance drones. As autonomy expert Michael Horowitz observed, the war in Ukraine has “demonstrated the utility of AI-enabled weapons and their necessity” in environments where enemy electronic warfare can easily disrupt remotely piloted systems. In other words, if you can’t count on a stable data-link or GPS signal, the only option is to give the machine enough onboard intelligence to fend for itself.

Shortage of Human Operators

AI autonomy also addresses a very human constraint: the limited supply of skilled operators. Western allies have provided Ukraine with hundreds of thousands of drones – from small quadcopters to loitering munitions – but converting that influx into battlefield advantage requires large numbers of trained pilots. This has proven to be a significant bottleneck. A member of Ukraine’s National Guard “Typhoon” drone unit noted that there is a serious shortage of drone pilots and that training a new pilot from scratch to a basic level takes at least three months. Even once trained, human operators can only fly so many missions before fatigue or risk aversion becomes a factor. True autonomy offers a way to alleviate this bottleneck. Intelligent autonomous systems can allow one human to supervise multiple drones at once, or can carry out routine tasks without direct human control. Reducing the manpower burden is not just about efficiency — it could be decisive in a protracted war where both human and machine resources are strained.

Autonomy in Logistics and Casualty Evacuation

One of the most underappreciated benefits of AI autonomy lies in battlefield logistics and medical evacuation. In Ukraine, robotic ground vehicles are already starting to transform frontline resupply and casualty retrieval. Nearly 47 percent of Ukrainian unmanned ground vehicle missions to date have involved delivering supplies or evacuating wounded soldiers. These robots can venture into areas where any manned vehicle (or even a helicopter) would be immediately targeted by the enemy, thus getting critical aid to the injured faster and improving the chances of survival. However, current ground vehicles in Ukraine remain mostly tele-operated and have clear vulnerabilities. Units avoid operating these robots in daylight because the machines are easily spotted and destroyed by enemy first-person-view attack drones. Worst of all, if a remote-controlled evacuation ground vehicle loses its radio link or GPS signal, it can grind to a halt, potentially leaving a wounded soldier stranded in the line of fire. This is precisely where AI-driven autonomy can be a lifesaver: a vehicle with onboard perception and navigation could complete the evacuation even after losing its link. Similarly, autonomous supply convoys could press forward to isolated outposts without GPS, using onboard sensors and pre-loaded maps to find their way.

All these factors suggest that pursuing AI autonomy in defense is not a luxury or a niche endeavor, but a strategic imperative. The challenges to deploying reliable autonomous systems are multifaceted — spanning technical hurdles, organizational inertia, and ethical considerations — but the potential payoff is game-changing across the spectrum of military operations. Indeed, U.S. defense planners are preparing for conflict scenarios — such as in the Indo-Pacific — where communications may be heavily contested or denied, and they view autonomous weapon systems as increasingly critical under those conditions.

Challenges in the Ecosystem

If true AI autonomy is so critical, why haven’t the United States and its allies fielded it en masse yet? It’s not for lack of technological know-how — as mentioned, the pieces of the puzzle are largely known to experts. The reasons are instead ecosystem and institutional hurdles. The good news is that these hurdles can be overcome with the right policies and investments. The bad news is that, so far, progress has been slow. The major challenges include the following.

Data Scarcity and Silos

Modern AI, especially the machine-learning-based kind, feeds on data. Yet in the defense autonomy realm, there is a lack of high-quality, shareable datasets for training and validating AI models (e.g., image datasets of military targets, terrain data for navigation, simulation data from combat scenarios). Each new company ends up reinventing the wheel — collecting its own data or paying for access — which is time-consuming and costly. A more efficient approach would be to create large open or government-provided datasets (e.g., curated sensor data for different environments) that everyone can use as a baseline. The absence of this kind of data infrastructure is holding back progress.

Testing Ranges and Competitions

For companies developing autonomous navigation, the ability to conduct frequent real-world tests is essential. Ukrainian startups can do this today, refining their systems weekly and rapidly integrating battlefield feedback. U.S. companies, by contrast, have no such opportunities. They face limited access to realistic environments, heavy administrative requirements, and strict limits on how often live trials can be conducted.

A significant increase in new testing facilities and autonomy-focused competitions would dramatically expand opportunities for experimentation and remove one of the key barriers to innovation in autonomous navigation.

Limited Open-Source Collaboration

A thriving open-source ecosystem is a key driver in fields like software and AI at large (think of Linux or TensorFlow). In defense autonomy, however, much development is siloed in classified or proprietary projects. There’s a natural need for secrecy in some areas, but many building blocks of autonomy (navigation algorithms, perception AI, etc.) wouldn’t reveal sensitive capabilities if shared. Encouraging open-source frameworks and libraries for autonomous systems (with appropriate security vetting) would prevent each player from having to start from scratch. It would also allow the broader tech community — including academics and startups — to contribute more easily to defense-relevant autonomy problems.

Regulatory and Airspace Restrictions

Until recently, stringent regulations such as Federal Aviation Administration rules made it hard to test and deploy autonomous drones domestically. The Trump administration has started to simplify some of these rules — for instance, moving toward allowing more beyond-visual-line-of-sight drone operations — but red tape remains a significant obstacle. Companies developing autonomous vehicles or drones often struggle to get permission for live trials, stunting innovation. On the military side, safety bureaucracy can similarly slow down field experimentation with autonomy. Policymakers need to establish regulatory frameworks that enable rapid testing and iteration without compromising safety.

The United States and its allies currently lack the supportive infrastructure to make developing autonomous systems as rapid and cost-effective as it could be. The Pentagon and allied defense ministries should invest in things like common datasets, open testbeds, and collaborative platforms – these are force multipliers that lower the barrier to entry for new innovators and reduce duplication of effort.

Smart Architecture Is Key

A striking feature of the drone and robotics revolution is its breakneck pace. The hardware and software state-of-the-art is leaping ahead on roughly 18–24-month cycles. A drone or AI algorithm that was cutting-edge in 2020 might be halfway to obsolete in 2023. This rapid evolution will continue for the foreseeable future and is likely to accelerate as more AI is integrated. For militaries used to procurement cycles measured in decades, this poses a serious dilemma. No military can afford to replace its entire inventory with new models every two years. So how do we stay technologically competitive without going broke or logistically insane?

The answer is smart, modular architectures. The U.S. defense industry should design autonomous systems (and really, all military systems) with upgradeability in mind from day one. This means using open standards and modular designs so sensors, computing hardware, and algorithms can be swapped out or upgraded in the field or at a maintenance depot, without junking the whole platform.

Another issue is the lack of public long-term roadmaps for autonomy. In the Cold War, the United States kept a close eye on Soviet capabilities and had clear technology roadmaps (many classified, of course) for where air combat or air defense was headed 10–15 years out. In autonomy, because it spans military and civilian tech and much is happening in the private sector, there’s less coherent forecasting. This makes it harder for engineers and architects to anticipate what threats or environments their systems will face a decade from now.

Department of Defense procurement leaders should therefore demand this kind of flexibility in autonomous drones, unmanned ground vehicles, and missiles. If these systems are built correctly, they can continuously evolve via module swaps and software updates, rather than requiring wholesale replacement. This is not only cost-effective but also strategically vital. It means adversaries can’t win simply by deploying a new jammer or trick, because friendly systems will be updated to counter it within months, not decades.

The Next Generation of AI Autonomy

Looking more than five years ahead, what will “AI autonomy” mean in practical terms? It’s not just today’s drones made a little bit better. We are talking about qualitative changes in capability that will redefine warfare, where there probably won’t be any meaningful difference between rockets, drones, and ground vehicles. Based on current trends, here are some key aspects of next-generation autonomy:

Mission Autonomy and Self-Directed Goals

Future autonomous systems won’t just execute missions — they will help define them. Instead of waiting for detailed human orders, a drone or robot team will interpret a commander’s intent and plan the best way to achieve it. For example, an autonomous surveillance network might on its own decide which areas to scout based on shifting enemy movements, without needing explicit tasking for each drone. This requires AI that understands context and priorities, doing some of the operational thinking that only humans do today.

New Sensor Integration

Next-generation autonomous platforms will be augmented with novel sensors beyond today’s cameras and radars. Think of hyperspectral imagers to detect chemical signatures or quantum magnetometers for GPS-free navigation. Many micro-sensors might feed data into an AI “brain” on the fly. The result is multi-modal sensor fusion far superior to today’s: autonomous systems that perceive the environment in wavelengths and details humans cannot, all processed in real time to inform their actions.

High-Speed, High-G Maneuvers

Autonomous weapons will operate at extreme speeds and accelerations, pushing physics and AI control to new limits. We could see AI-guided missiles and drones maneuvering at speed regimes no human pilot could endure or react to in time. Autonomy unlocks performance previously unthinkable for crewed systems. These platforms will blur the line between munition and aircraft, exploiting speeds and agility only machines can handle.

Counter-Robot Warfare

In future high-tech conflicts, the battlefield will include weapons designed not to target people, but to eliminate machines — an inherently more complex task. This will lead to the development of specialized hunter-killer drones, missiles, and early-warning systems built to detect and track unmanned aerial and ground platforms. We’re already seeing this shift in Ukraine, where compact interceptor drones are being used to hunt down and destroy enemy drones mid-flight. Looking forward, munitions will be tailored to neutralize small, inexpensive autonomous systems with greater efficiency and lower cost. Counter-robot warfare is emerging as its own distinct activity, demanding purpose-built detection and strike tools to stay ahead of adversaries that rely on unmanned technologies.

Next-Level Sensor Fusion and Networking

Tomorrow’s autonomous forces will operate as meshed networks, sharing data and decisions in real time across air, land, sea, and space. Each unit — whether drone, unmanned ground vehicle, or underwater vehicle — will act as a node in a larger intelligent web. This network won’t just share raw sensor data but will fuse insights, enabling cooperative decision-making. For example, a ground sensor might detect movement and automatically cue a drone to investigate or strike, with no human in the loop. Swarms will coordinate target allocation among themselves, and unmanned submarines could triangulate enemy contacts collaboratively.

These networks will also support dynamic environmental understanding. Instead of relying on stale or manually updated maps, autonomous platforms will constantly scan, map, and redistribute updated battlefield layouts. If a bridge is destroyed, a new obstacle appears, or terrain shifts, the network will propagate that change in seconds — giving all units an up-to-date common operating picture. This kind of distributed autonomy is far too complex for manual coordination. Only real-time AI orchestration can make such responsiveness possible.

The coming wave of AI autonomy is about autonomous systems that are smarter, faster, more coordinated, and more self-reliant than anything fielded to date. It’s a paradigm shift from tools that assist humans to agents that actively pursue campaign objectives alongside humans. The side that masters and deploys them first will hold a significant warfighting advantage.

Engineering the Future of Dominance

Russia and China aren’t theorizing about future autonomy — they’re actively building it. Their top engineering minds are already iterating toward battlefield-ready AI autonomy, and they’re doing so using components and architectures that are commercially available today. The technologies required to gain a decisive edge — onboard sensing, real-time mapping, swarm coordination — already exist. The only question is who will scale and deploy them first.

In this race, the strategic direction of the Department of Defense is paramount. The defense tech ecosystem orients itself around what the department demands. If procurement signals remain vague or rooted in yesterday’s paradigms — unmanned but not truly autonomous, GPS-dependent, network-reliant — vendors will continue optimizing for incremental upgrades, not breakthroughs. But if the Pentagon defines autonomy clearly and demands capabilities that survive contested, degraded, and denied environments, industry will respond in kind. They always do.

True AI autonomy is not just a tactical advantage — it’s a force multiplier across logistics, homeland security, and strategic deterrence. Moreover, while these autonomy initiatives may appear highly capital-intensive, the returns on investment would extend far beyond defense alone. Breakthroughs in autonomous systems could benefit homeland security and even the broader economy, which is rapidly automating and robotizing countless operations and processes. Indeed, investing in true AI autonomy for defense could deliver a massive boost to the economy — much like how the space industry’s investments of the 1960s to 1980s spurred widespread innovation and growth, contributing an estimated 2.2 percent increase to long-run U.S. GDP.

However, unlocking the full potential of AI autonomy requires more than simply throwing dollars at defense primes. Deliberate infrastructure investments are needed: shared datasets, national test ranges, interoperability standards, and open access for qualified civilian players. Lowering ecosystem barriers doesn’t just cut costs — it expands the base of innovators able to contribute. The recent procurement reforms and signals from the Trump administration offer reasons for optimism, but they should be followed by sustained action and, critically, by placing experienced industry and field-savvy professionals inside key decision-making chains.


Vitaliy Goncharuk is an American entrepreneur of Ukrainian origin, specializing in autonomous navigation and AI. He is the CEO of A19Lab, a company developing autonomous systems for drones and robots. In 2022, his previous company, Augmented Pixels, which focused on AI autonomy, was acquired by Qualcomm. From 2019 to 2023, Vitaliy chaired Ukraine’s AI Committee.

**Please note, as a matter of house style War on the Rocks will not use a different name for the U.S. Department of Defense until and unless the name is changed by statute by the U.S. Congress.

Image: Midjourney
