Cogs of War

Every Soldier a Software Builder: Governing the Army’s New Digital Workforce

Anthony A. Joyce
March 30, 2026

Over the past decade, the Department of Defense has tested internal software development through efforts like the Air Force’s Kessel Run, the Army Software Factory, and the Marine Corps Software Factory. These initiatives showed that military personnel can build useful software when given the right tools and infrastructure.

In its push to make better use of data, the Army fielded powerful digital platforms as a service across the force, such as Palantir’s Army Vantage and the Department of Defense’s GenAI.mil. These programs were meant to improve analysis and decision-making. But they also did something else: They gave soldiers built-in tools to create their own software, including AI agents. The Army no longer has to depend almost entirely on outside contractors to build software. Soldiers across the force can now build tools inside approved Army platforms.

This new capability creates a new problem. Without a clear way to manage it, the Army risks ending up with overlapping tools, duplicated effort, and promising projects that fade as quickly as they appear. The Army has created a powerful engine for innovation, but it still lacks a clear way to identify what works, support it, and scale it across the force.

Much of the discussion about this new reality of soldier-led software development focuses on who develops software, but the real question is: How can the Army identify useful tools, discard weak ones, and expand the ones that actually solve real problems? In answering that question, I’ll focus on software built inside Army-approved platforms instead of AI-assisted coding or autonomous development, which raises separate governance questions.

From Data to Tools

The Army’s public conversation about artificial intelligence has focused heavily on data: better dashboards, better information, better decisions. That frame misses something important. Better data does not fix clunky workflows or eliminate repetitive staff tasks; software does. While the Army adopted these platforms for data analysis, it unintentionally acquired tools that now let soldiers build software from within these approved programs.

These platforms already provide the tools, data access, and built-in security approvals that soldiers need to create useful software. In the past, fielding any new Army software meant navigating a lengthy cybersecurity accreditation process — the Authorization to Operate — that could take months or years. Since these platforms are already accredited, the tools built inside them inherit those approvals automatically, without a separate review.

This shift — from analyzing data to building tools that act on it — creates a new professional role inside the force: the soldier-developer. A soldier-developer is not a software engineer: Their role is to create practical problem-solving software tools to address local unit and operational needs, from automating a tedious planning task to creating an AI agent for wargaming. They are not building major Army systems from scratch. Rather, they are using approved tools inside approved systems to solve useful problems quickly.

The Soldier-Developer Ecosystem

As more soldiers build tools, the Army is creating an internal competition in which the best solutions rise through use. A private’s tool can displace a colonel’s if it works better and people actually adopt it. The best capability wins regardless of where it comes from. The Defense Department’s Artificial Intelligence Strategy names this explicitly: competition by small teams with transparent results, not centralized planning, should drive AI innovation.

Three forces sort good tools from bad ones without central management. Soldiers rotate frequently between posts, so tools that only one person can maintain disappear when their creator leaves. Competing duties drive out tools that require constant upkeep. And when multiple soldiers tackle the same problem, better solutions displace weaker ones. A tool that survives all three pressures has already proven its worth under real conditions.

Finding What Works: Evidence Over Prediction

This approach departs from the traditional model, in which the Army predicted its software needs and funded development in advance. Instead, it creates conditions for real use to surface which tools work, then builds the infrastructure to scale what the evidence reveals. Prediction made sense when building software required scarce specialists and expensive infrastructure. Army-approved platforms change the math: When soldiers can build tools inside existing, approved environments at low cost, the Army can observe actual usage and demand rather than predict it.

When soldiers build tools across the force, two things happen at once. They generate targeted solutions to local problems, and they create a discovery mechanism for the Army as a whole. Because tools compete through real use, platform data reveals which solutions persist, spread to other units, and deliver measurable value. By the time a tool reaches the threshold for broader review, the hardest question in software — “Will anyone actually use this?” — has already been answered by the force itself.

A System for Managing the Soldier-Developer Ecosystem

Organizations often find better solutions when they let people experiment than when they try to predict everything from the top. But experimentation without structure produces clutter instead of capability. The Army needs both: room for local building at scale and a clear path for proven tools to move upward without suppressing the experimentation that generates them.

The model I propose has three levels: local tool building, Army-level review and support, and enterprise integration across the force. These levels provide a governance architecture for how the Army turns a useful local tool into a scalable and lasting capability.

The institutional actors named here reflect the Army’s organizational structure following the activation of the U.S. Army Transformation and Training Command on October 2, 2025. Senior Army leaders with current visibility into that command’s evolving relationships are best positioned to determine the precise alignment of responsibilities. The logic of this proposed governance architecture does not depend on perfect organizational wiring. Rather, it makes the case that the Army needs to track what tools are being built, review the tools that show promise, and own and sustain the tools that prove valuable.

Level 1: Local Tool Building

At this level, soldiers build tools inside Army-approved platforms. No new authority is required, and this activity is already happening today inside existing platforms. However, a friction point around platform usability limits how widely tools get built. My experience teaching faculty members and students to use these platforms at the Command and General Staff College suggests that the biggest barrier to broader adoption is not what the platforms can do but how hard they are for operational soldiers to use: They were built for data analysis and data specialists.

The Army has seen this before. Moving from command-line systems like Microsoft Disk Operating System to graphical systems like Windows did not reduce what computers could do: It made those capabilities accessible to everyone. If Army platforms stay optimized for specialists, the soldier-developer population will never reach the scale and volume needed to reveal which tools are worth scaling.

To maintain visibility of the tools soldiers develop at this level, all Army-approved platforms should maintain an automatic tool registry. Every tool created inside the platform gets registered automatically, with no extra action required from the builder. The registry records who built it, what unit they belong to, what problem it solves, what data it uses, how often it is used, and how many active users it has. A shared Army-wide registry pools this information across all approved platforms, giving leaders visibility into what exists, what is spreading, and what has proven its value. Futures and Concepts Command, or its designee, should own this registry.
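
To make the registry idea concrete, here is a minimal sketch of one registry record, written in Python. The field names and types are illustrative assumptions on my part, not the schema of any fielded Army platform:

from dataclasses import dataclass, field
from datetime import date

@dataclass
class ToolRecord:
    """One automatically captured entry in the Army-wide tool registry."""
    tool_id: str                   # platform-assigned identifier
    builder: str                   # who built the tool
    unit: str                      # the builder's unit
    problem: str                   # the problem it solves
    data_sources: list[str]        # the data it uses
    created: date                  # when the platform registered it
    weekly_uses: int = 0           # how often it is used
    active_users: int = 0          # how many active users it has
    adopting_units: set[str] = field(default_factory=set)  # organizations using it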

Review Threshold

The move from Level 1 to Level 2 should be evidence-driven, not application-driven. Platform usage data tracks how tools spread across the force. When a tool is still in active use after ninety days, ranks among the most-used tools on the platform, and has been adopted by at least two separate organizations, it triggers a formal review. Persistence weeds out abandoned experiments. Relative ranking stays meaningful at any scale. Cross-unit adoption proves the tool solves a problem broader than one team.
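
Expressed against the illustrative ToolRecord above, the trigger might be scripted roughly as follows. The ninety-day and two-organization thresholds come straight from the criteria; the top-ten-percent cutoff for "most-used" is my assumption:

from datetime import date

def triggers_review(tool: ToolRecord, usage_percentile: float, today: date) -> bool:
    """Check all three evidence thresholds for a formal review."""
    persistent = (today - tool.created).days >= 90 and tool.weekly_uses > 0  # still in active use
    highly_used = usage_percentile >= 0.90       # assumed cutoff for "most-used tools"
    cross_unit = len(tool.adopting_units) >= 2   # adopted beyond one organization
    return persistent and highly_used and cross_unit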

Triggering a review does not advance a tool automatically: It surfaces a proven tool for a decision. Usage data tells the Army what the force has already chosen: Leaders decide which choices to actively support. This also protects scarce engineering resources, since support goes to tools with demonstrated traction, not good ideas that will never scale.

For tools built in operational units, the local Operational Data Team conducts this first review, assessing mission impact and local sustainability. For tools developed outside the operational force, a designated expert from the sponsoring organization — such as their Chief Data and Analytics Office — performs the same review. In either case, if the tool shows potential for broader adoption, the sponsor can push it forward for formal technical review by the Combat Capabilities Development Command. That creates one clear path upward and keeps scarce engineering resources focused on tools that real users have already validated.

Level 2: Army-Level Support

Tools that pass the review should receive formal Army support. This level should exist because soldier-developers cannot maintain software indefinitely. Soldiers rotate, duties accumulate, and even genuinely useful tools decay without an owner.

In this vision, responsibility transfers to the Army organization best placed to own the tool, typically a Future Capability Directorate or a functional organization within Combined Arms Command responsible for the relevant tool’s scope. That organization becomes the tool’s sponsor and advocates for its continued use.

Technical support, updates, and long-term maintenance go to an engineering organization like the Combat Capabilities Development Command or the Army Software Factory. These teams strengthen security, manage dependencies, and keep the tool updated long after its original builder has moved on. Level 2 does not make a soldier-built tool a formal Army program. Instead, it gives the tool a named organizational owner and the support needed to survive.

Level 3: Enterprise Use

A small number of Level 2 tools will prove valuable enough to warrant Army-wide enterprise use. In this new governance architecture, before the Army approves any new software purchase, the Army office responsible for that capability should be required to check the Army-wide tool registry as part of its standard Doctrine, Organization, Training, Materiel, Leadership/Education, Personnel, Facilities, and Policy analysis. Just as the Army considers non-materiel solutions before buying new equipment, it must now consider proven soldier-built tools before buying new software.

That check leads to one of four outcomes: adopt an existing tool directly and avoid a new purchase, develop a promising tool further using existing legal authority to build and test software quickly, use the soldier-built tool as a working model that sharpens what the Army asks industry to build, or confirm no useful internal solution exists and proceed to an outside purchase with confidence.
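
As a rough sketch, that registry check could be scripted against the same illustrative records. The article leaves these judgment calls to the reviewing office, so the conditions separating the four outcomes below are placeholders rather than proposed policy:

def registry_check(matches: list[ToolRecord]) -> str:
    """Map a registry search for an existing capability to one of four outcomes."""
    if not matches:
        return "no internal solution exists: proceed to an outside purchase"
    best = max(matches, key=lambda t: (len(t.adopting_units), t.active_users))
    if len(best.adopting_units) >= 2 and best.weekly_uses > 0:  # placeholder adoption bar
        return "adopt the existing tool directly and avoid a new purchase"
    if best.weekly_uses > 0:
        return "develop the promising tool further under existing authority"
    return "use the soldier-built tool as a working model for the industry ask"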

The March 2025 “Directing Modern Software Acquisition to Maximize Lethality” memo directs the Pentagon to cut procurement bureaucracy and get software to warfighters faster. That effort speeds the outside pathway. The system proposed here strengthens it: When the Army goes to industry with a validated prototype and real usage data instead of an untested specification, it gets better software faster and spends less money rebuilding what soldiers have already built.

Three Reinforcing Loops: Selection, Governance, and Incentive

Together, these mechanisms create three reinforcing loops. The selection loop identifies which tools are actually useful through usage data, the Army-wide registry, and automatic review triggers. The governance loop keeps good tools alive by assigning organizational owners and engineering support. The incentive loop makes the system self-sustaining, and it operates at two levels simultaneously.

At the individual level, the incentive loop identifies when a soldier-developer’s Level 2 tool provides a solution to a capability gap, eliminating or reducing an external acquisition action. The cost avoidance and operational value produced is precisely the kind of measurable contribution the Army Incentives Award Program was designed to reward, with awards of up to $25,000 for innovations producing measurable savings or improved capability. At the institutional level, every soldier-developer tool adopted in place of a contracted solution frees up the Army’s budget for the next requirement. The Army is financially motivated to keep the cycle running: solving problems from within makes the next problem more affordable.

These loops reinforce themselves. Incentives drive more building at Level 1, more building generates more review triggers, more reviews produce more tools with Army-level support, and more supported tools create more opportunities for enterprise use. No central micromanagement is required.

What Senior Leaders Should Do

Three actions would make this governance architecture real.

First, formation commanders should direct their Operational Data Teams to serve as the review body for tools built in their units. Organizations outside the operational force should designate equivalent local experts, such as their Chief Data and Analytics Offices, to do the same. This filter ensures only proven, mission-relevant tools advance.

Second, Army leadership should direct the Combat Capabilities Development Command to serve as the single formal technical review authority for tools that pass initial review thresholds. Concurrently, Futures and Concepts Command and the Combined Arms Command should direct their respective Future Capability Directorates and functional proponents to prepare to take long-term ownership of tools that align with their portfolios. Good ideas must not die for lack of a home.

Third, and most immediately actionable: mandate a registry review before approving any new software purchase, as part of standard Doctrine, Organization, Training, Materiel, Leadership/Education, Personnel, Facilities, and Policy analysis. This single policy change costs nothing to implement and will immediately surface the need for a consolidated registry and eventually identify internal software capabilities the Army already possesses.

Conclusion

The Army has two things few organizations have: volume and scale. If the Army leverages its volume of soldier-developers to conduct bottom-up innovation and experimentation at the local level, it can scale valuable software tools and become one of the world’s largest software developers. The Army doesn’t achieve this by hiring engineers or standing up new organizations. Rather, it must recognize that its soldier-developers form a broad, decentralized software-building workforce and create a system that adopts and scales their creations.

A shared Army-wide registry makes this activity visible. A three-level process focuses attention on tools that have already proven their value rather than tools leaders only assume will matter. Automatic review triggers surface what the force has already chosen without requiring central oversight of everything soldiers build. A self-reinforcing cycle of selection, governance, and incentive sustains the ecosystem while freeing resources for other needs.

The Army does not need to become something it is not. It needs to recognize what it is already becoming and make that transition deliberate rather than accidental.

Soldiers are already building. The only question is whether the Army builds the system to scale what they produce.


Anthony A. Joyce is a U.S. Army officer. He is a FA59 strategist and instructor at the U.S. Army Command and General Staff College. Additionally, he cofounded an artificial intelligence startup in 2023 and is an award-winning tabletop game designer who has designed games for companies including Netflix, Meta, and Wizards of the Coast. As a strategist, Joyce served at all levels of government to include the Office of the Secretary of Defense (Policy), Headquarters Department of the Army, and the U.S. House of Representatives as an Army liaison.

The views here are those of the author and do not represent the opinions or positions of the Command and General Staff College, the U.S. Army, the Department of Defense, or any part of the U.S. government.

**Please note, as a matter of house style, War on the Rocks will not use a different name for the U.S. Department of Defense until and unless the name is changed by statute by the U.S. Congress.

Image: Sgt. Gianna Sulger via DVIDS.
