It’s almost cliché but nonetheless true: The U.S. military’s AI-enabled platforms are only as good as their inputs.
Across the force, these systems are transforming how analysts assess threats, develop targets, and plan operations. From Palantir’s Maven Smart System to various service-specific initiatives, they promise unprecedented speed and precision in processing vast amounts of battlefield data. The impact of this technology is hailed as revolutionary, transforming the way the military thinks, plans, and fights.
But right now, the vast preponderance of data and analysis feeding these systems is oriented toward one thing: enemy forces. The U.S. military is building an extraordinarily sophisticated machine for understanding military capabilities, tracking enemy forces, and developing kinetic targeting options.
What it is not building is a system that understands how adversaries are embedded in the human terrain where competition and conflict occur, how actions will reverberate through complex social, cultural, and political systems, or why certain populations may respond to these actions in unexpected ways.
Concurrently, emerging agentic AI capabilities are poised to transform the digital information environment from which the military pulls much of its data and analysis on contextual dynamics into a corrupted and distorted hall of mirrors. They will do so by automating and radically accelerating the production of synthetic content and “fake news” at unprecedented speed and scale.
A two-pronged response is urgently needed to tackle the challenge.
The first prong is “machine-versus-machine” — deploying the U.S. military’s own agentic AI capabilities to detect and counter adversarial attacks. This line of effort will identify synthetic or manipulated content, take down bot networks, and generate counter-messaging at the speed of AI.
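To make the first prong concrete, the short sketch below illustrates one building block of such detection work: flagging accounts whose posting rhythms are suspiciously correlated, a common statistical signature of machine-coordinated networks. It is a minimal illustration under stated assumptions, not a description of any fielded system; the data, function names, and threshold are all hypothetical.

```python
# Minimal sketch: flag accounts whose posting schedules are near-identical,
# one common signature of machine-coordinated "swarm" activity.
# All data, names, and thresholds here are hypothetical illustrations.
from collections import Counter
from itertools import combinations
from math import sqrt

def hourly_profile(timestamps: list[int]) -> list[float]:
    """Normalize an account's posts into a 24-bin hour-of-day histogram."""
    counts = Counter(t % 24 for t in timestamps)
    total = sum(counts.values()) or 1
    return [counts.get(h, 0) / total for h in range(24)]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two posting profiles."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def flag_coordinated(accounts: dict[str, list[int]], threshold: float = 0.95):
    """Return account pairs whose posting rhythms are suspiciously similar."""
    profiles = {name: hourly_profile(ts) for name, ts in accounts.items()}
    flagged = []
    for a, b in combinations(profiles, 2):
        score = cosine(profiles[a], profiles[b])
        if score >= threshold:
            flagged.append((a, b, round(score, 3)))
    return flagged

# Hypothetical example: two synthetic accounts posting on the same schedule
# (timestamps given as hour-of-day for simplicity).
posts = {
    "acct_a": [1, 2, 2, 3, 14, 14, 15],
    "acct_b": [1, 2, 2, 3, 14, 14, 15],
    "acct_c": [5, 9, 11, 18, 20, 22, 23],
}
print(flag_coordinated(posts))  # [('acct_a', 'acct_b', 1.0)]
```

A fielded system would fuse many such signals, from linguistic fingerprints to account-creation metadata, but the underlying logic holds: coordination leaves statistical traces that machines can find at a scale and speed no human team can match.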
The second prong is profoundly human. As AI-created “slop” becomes increasingly sophisticated, compelling, and omnipresent online, the value of first-hand, human-sourced insight will skyrocket. If the military wants to make sense of this new information ecosystem — online and offline alike — it should anchor its understanding in ground truth. This can be achieved by feeding its AI-enabled central nervous system with structured, consistent, human-generated inputs.
Unless both efforts succeed, the AI revolution risks catalyzing a surge in kinetic military activity that is detached from substantive understanding of the real-world effects that it will generate.
War of the Machines: The Agentic AI “Fake News” Deluge
Agentic AI represents a fundamental leap beyond previous AI systems. Unlike traditional AI, which responds to queries and processes data, agentic AI systems can autonomously set goals, develop strategies, and execute complex, multi-step operations with minimal human oversight. These systems can operate continuously, learning and adapting in real-time, making decisions about how to achieve their programmed objectives while navigating complex digital environments.
China has emerged as an early and aggressive adopter of agentic AI for influence operations. In recent months, Chinese state-sponsored actors have deployed sophisticated agentic AI attacks that exponentially scaled operations that were previously labor intensive. Whereas bot farms once required teams of human operators to create content, manage accounts, and coordinate messaging across platforms, agentic AI now automates the entire pipeline. These systems can generate synthetic personas complete with coherent posting histories, create contextually appropriate content in multiple languages, engage in seemingly natural conversations with real users, and coordinate campaigns across dozens of platforms simultaneously.
The scale is unprecedented. A single agentic AI system can manage thousands of synthetic accounts, each with unique characteristics and posting patterns. These systems can monitor trending topics in real-time, identify influential voices, and craft targeted responses designed to amplify divisive narratives or suppress inconvenient truths. They can detect when specific accounts or messages are gaining traction, and automatically deploy coordinated swarms of synthetic engagement to boost or suppress them.
Perhaps most concerning is the capacity for micro-targeted operations. Agentic AI enables campaigns that operate simultaneously at multiple levels — from macro-level narratives targeting entire demographic cohorts to micro-level operations focused on key individuals. Chinese influence operations have demonstrated the ability to build detailed profiles of military personnel, policymakers, and journalists, then craft personalized influence campaigns delivered through multiple touchpoints. A targeted individual might thus encounter seemingly unrelated content across different platforms, all subtly reinforcing the same narrative, without ever noticing the coordinated nature of the campaign.
The technology also enables unprecedented sophistication in multimedia content generation. Agentic AI systems can create convincing synthetic images, videos, and audio tailored to specific contexts. They can generate fake news articles complete with fabricated sources and quotes, create synthetic satellite imagery suggesting military deployments that never occurred, or produce doctored video footage showing events that never happened. These operations increasingly blend synthetic and authentic content so seamlessly that even sophisticated users struggle to distinguish between them. The result is an information environment where the provenance and veracity of information becomes fundamentally uncertain.
For the U.S. military, this transformation poses an existential challenge to one of its most critical intelligence sources: the open-source information environment. Military planners have grown increasingly dependent on information gleaned from social media, online news sources, and digital communications to understand civilian populations, track adversary narratives, and gauge public sentiment. As agentic AI floods these channels with synthetic content that is potentially indistinguishable from reality, the signal-to-noise ratio collapses. The internet becomes a place where no one can be certain whether what they are looking at reflects reality or is synthetic manipulation designed to deceive.
This is not a theoretical future threat — it is happening now. As such, the U.S. military should urgently field its own agentic AI systems to sift through this noise. Human-centric capabilities risk being rapidly overwhelmed by the scale and speed of the coming deluge, so machines will be vital. Absent success in this endeavor, the U.S. military’s own AI-enabled intelligence platforms (fed on data scraped from an increasingly corrupted information environment) risk making recommendations based on synthetic realities that bear little resemblance to facts on the ground. The result could be disastrous.
Ground Truth: Essential for Understanding
The coming machine-versus-machine competition to detect and filter synthetic content will be essential to manage the challenge described above. But it will not be sufficient. To reliably establish reality in the information environment — and to make sense thereof — the U.S. military will need direct access to ground truth.
For the past two decades, the U.S. military has had a tortured relationship with the value of contextual social, cultural, economic, and political insights. In the midst of counter-terrorism and counter-insurgency operations across the Middle East and beyond, prevailing wisdom held that “the population is the center of gravity.” Landmark publications like Field Manual 3-24: Counterinsurgency set out the primacy of understanding and engaging with the human terrain.
As argued elsewhere, however, these theories were never translated into consistent action. Instead, the operational force paid lip service to the dogmas of counter-insurgency, while focusing ever more narrowly on mapping and targeting enemy networks. Using advanced intelligence platforms of the pre-AI era, the U.S. military conceptualized opponents as molecules — collections of nodes and linkages that could be methodically dismantled through precision targeting. The results were tactically impressive, but strategically ineffective.
The failure was structural. The U.S. military’s intelligence architecture separated enemy-centric analysis from efforts to understand the civilian and information environments. Enemy networks were viewed in isolation, as molecules suspended in a petri dish, rather than as organic outgrowths of the societies in which they operated. While the force eliminated nodes and pruned branches, the root structures that sustained these networks went largely untouched because they existed outside the U.S. military’s primary analytical aperture.
Within the fielded force, capabilities responsible for understanding social and cultural context faced an impossible mandate: They were tasked to understand all things civilian and informational with a fraction of the resources, training, and personnel allocated to core intelligence functions. The analytical frameworks prescribed by doctrine provided reductive, fill-in-the-blank templates that were fundamentally unfit for purpose. More importantly, there were no signature deliverables or consistent outputs that these capabilities were required to produce.
These problems persist to the present day. Fundamentally, the U.S. military has never been fully comfortable relying on tactical-level personnel to make sense of facts on the ground. Instead, it has used field personnel as a network of sensors to collect data. The mantra “every soldier is a sensor” captures this approach: The U.S. military calibrated its sensors to gather data points, not to interpret what they saw before them.
A paradigm shift is needed in how the U.S. military thinks about and employs its own people. Those with direct access to ground truth should be empowered not merely to collect data, but to generate structured analytical products that feed directly into the U.S. military’s AI-enabled central nervous system. This means developing consistent frameworks for analysis, training personnel in their application, and establishing signature deliverables that units are required to produce.
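As a purely illustrative sketch of what “structured, consistent” inputs might look like in practice, consider a minimal report schema of the kind a trained soldier could complete in the field and an AI pipeline could ingest directly. The field names, categories, and example below are hypothetical, drawn from no existing doctrine or system.

```python
# Illustrative sketch of a structured ground-truth report that tactical
# personnel could generate and an AI pipeline could ingest directly.
# Every field name and category here is hypothetical, for illustration only.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class GroundTruthReport:
    observer_id: str                 # who made the observation
    location: str                    # where (grid reference or named area)
    observed_at: str                 # when, in ISO 8601 UTC
    domain: str                      # e.g., "economic", "political", "social"
    observation: str                 # what was directly seen or heard
    assessment: str                  # the observer's structured interpretation
    confidence: str                  # "high" / "medium" / "low"
    sources: list[str] = field(default_factory=list)  # first-hand basis

    def to_json(self) -> str:
        """Serialize for ingestion by a downstream analytic system."""
        return json.dumps(asdict(self), indent=2)

# Hypothetical example entry:
report = GroundTruthReport(
    observer_id="team-2-alpha",
    location="Named Area of Interest 7",
    observed_at=datetime.now(timezone.utc).isoformat(),
    domain="economic",
    observation="Market stalls half-empty; fuel prices doubled in two weeks.",
    assessment="Supply routes likely disrupted; discontent with local "
               "authorities rising among vendors.",
    confidence="medium",
    sources=["direct observation", "conversations with three vendors"],
)
print(report.to_json())
```

The value lies not in any particular format but in the consistency: identical fields, completed the same way across units, give machine systems a human-verified baseline against which to weigh whatever they scrape from a polluted digital environment.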
The military-adjacent academic community has a critical role to play as well, but it should transform how it operates. Historically, the U.S. military has funded a vast ecosystem of academic research. Notwithstanding recent budget cuts, there is no shortage of Ph.D.s at the Pentagon, within the geographic combatant commands, or at various military-sponsored think tanks and research institutions. These entities generate an abundance of reporting. Yet, their outputs have not typically been oriented toward real-time operational decision-making, and they have not been structured in a consistent manner.
Instead, military-funded academic research has mirrored the outputs of conventional academia, with lengthy publication lags and emphasis on methods and theory over operational utility. This should change. The operational force requires academic partners who can deploy to contested environments, work alongside front-line units, and generate timely, structured analytical products that directly feed AI systems. Academics should serve not as distant theorists but as embedded analysts who bring rigor to the investigative work conducted by military personnel. What is required is a partnership model in which academic expertise leverages the access of the operational force and enhances its organic analytical capabilities, creating a professionalized insight-producing engine within the military.
This integrated approach — combining tactical-level human observation with academic rigor and AI-enabled processing — is an essential step toward compensating for a digital information environment increasingly saturated with synthetic content. When AI systems can generate convincing synthetic content at scale, when bot networks can simulate grassroots movements, and when fake satellite imagery can suggest military deployments that never occurred, the only reliable way to cut through the noise is human observation of ground truth.
What Happens Next?
Without a direct pipeline that delivers structured, consistent ground truth at scale, the U.S. military’s AI intelligence systems will optimize what they can based upon the inputs fed to them. This will mirror the failures of the past two decades, wherein the U.S. military mastered the targeting process but lost war after war. AI-enabled intelligence systems will get exponentially better at identifying nodes, mapping networks, and developing kinetic strike packages while creating an increasingly compelling illusion of understanding. The sheer volume of data, the speed of processing, and the visual sophistication of the outputs will suggest that users know what they are looking at.
But this apparent mastery will mask a deepening ignorance of context and meaning. Decision-makers will not be able to reliably vet machine-generated outputs against a robust human-generated understanding of ground truth. Indeed, the reliability of digital inputs into AI-enabled intelligence platforms extends beyond the information environment into the core of the targeting process, where uncertain data quality and adversarial data poisoning are mounting concerns.
Within the digital environment, the U.S. military’s agentic AI systems will be battling to distinguish authentic content from synthetic manipulation. But even if this problem is adequately addressed, current systems will struggle to reliably explain why a targeted network regenerates within months. They will fail to anticipate how action in one location might cascade through social and economic systems to produce unexpected second- and third-order effects elsewhere. They will be poorly positioned to identify the contextual dynamics that explain why certain populations are receptive to adversary influence while others resist it.
These are precisely the sorts of insights that military personnel and their academic partners should provide — but only if empowered, trained, dispersed, and made accountable to do so.
The U.S. military stands at a decision point. One path leads to AI-enabled excellence in traditional military operations rooted in ground truth and informed by genuine understanding of context and consequences: faster targeting cycles, more efficient operations, better force protection, and superior kinetic effects, all underpinned by AI-enabled situational understanding anchored in structured and consistent human insights.
The other path leads to AI-enabled tactical brilliance combined with strategic blindness: an exquisitely optimized machine for winning battles while losing wars and for achieving lethal kinetic effects while remaining deaf to their reverberations across society and through time. The choice is not a binary between AI-enabled speed and scale on one hand and human insight and understanding on the other. The question is whether the U.S. military will invest in fusing the two, or watch its newest capability become a dangerous liability.
Nicholas Krohley, Ph.D., is the founder of FrontLine Advisory. He has over 15 years of experience working with governments, corporations, and civil society groups worldwide, with a specialization in intelligence analysis and resistance. He leads the Resistance Hub for the Irregular Warfare Initiative — Europe, and holds Ph.D. and M.A. degrees from King’s College London and a B.A. from Yale University.
Image: Midjourney