Supporting Joint Warfighting with Mission-Level Simulations
How can the United States develop confidence that defenses against hypersonic threats will be effective? How will the Pentagon’s 5th-generation aircraft perform against China and Russia? The only practical way to explore these questions is through computer simulation. But does the United States have the appropriate tools to perform the analyses?
The United States is in a position it has not been in for the last 30 years: American military supremacy is no longer assured. Peer nations — namely, China and Russia — pose credible, multi-domain threats to U.S. national interests. Competing with Beijing and Moscow requires that defense planners and acquisition professionals deeply understand U.S. capabilities and how they can best counter America’s adversaries.
Simulations are one of the few secure, cost-effective resources for realistically testing the U.S. military operations needed to deter its competitors. The United States has access to many computer simulation capabilities. However, when it comes to planning multi-service operations, these simulations are rarely integrated. Instead, they are exercised individually, with the output of one becoming, to the extent possible, the input to the next.
There is also value in combining simulations that represent different force levels. To analyze joint operations accurately, the Department of Defense should combine collections of tactical-level, high-fidelity weapon system simulations with more abstract campaign-level simulations from the different services and the intelligence community. The 1990s offer hard-won, billion-dollar lessons from several attempts to create this kind of resource, all of which fell short of their goals. Those lessons include organizing simulation programs with modern management structures, minimizing development risk, and establishing user buy-in early. The Pentagon should apply them by creating a simulation capability for exercising new joint concepts. It has recently taken a step in the right direction with the Integrated Defense Analysis Capability, a new joint simulation program in the Office of the Undersecretary of Defense for federating existing simulation systems. The Integrated Defense Analysis Capability may yet achieve important success if it can avoid the mistakes of the past.
Why Joint Simulations, And Why Now?
Finding the right mix of simulation tools for campaign-level and mission-level operations is paramount for defense planners. One size will not suffice. Too much detail can be as unhelpful as too little. Combining simulations that differ by the level of forces they represent is analogous to using road maps of differing scale: long-distance road trip planning calls for a wide-area map with major highways, but eventually higher-resolution maps are necessary to navigate to a precise address. Both sets of maps together present a viable, end-to-end route.
Existing defense simulations are similar to maps of differing scale in that they offer different levels of resolution. Campaign models (picture the board game Risk) present big-picture, low-resolution representations of the simulated world. They represent entire battle formations as single entities and rarely, if ever, include individual weapon systems. Instead, they use aggregate performance parameters, derived by averaging higher-fidelity results, for analyses in a specific context. Campaign-level simulations can run rapidly at an appropriately low resolution, trading precision for speed. Several high-level simulations offer the appropriate resolution to examine campaign-level questions, and their results are well understood and widely accepted. These simulations represent joint conflict at high levels of abstraction, allowing them to quickly illustrate multiple scenario variations.
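The aggregation step described above can be illustrated with a toy calculation. The sketch below is purely notional (the function names, per-shot hit probability, and trial counts are invented for illustration and bear no relation to any real system): many high-fidelity engagement trials are run, and their outcomes are averaged into a single parameter a campaign-level model could consume.

```python
import random

def high_fidelity_engagement(rng, hit_prob=0.7, shots=2):
    """Toy mission-level model: one aircraft fires `shots` weapons,
    each with an independent per-shot hit probability (assumed value)."""
    return any(rng.random() < hit_prob for _ in range(shots))

def aggregate_kill_probability(trials=100_000, seed=42):
    """Derive a single campaign-level aggregate parameter by averaging
    many high-fidelity trial outcomes (a simple Monte Carlo average)."""
    rng = random.Random(seed)
    kills = sum(high_fidelity_engagement(rng) for _ in range(trials))
    return kills / trials

# With these assumed inputs, the analytic value is 1 - (1 - 0.7)**2 = 0.91;
# the Monte Carlo estimate should land close to it.
print(round(aggregate_kill_probability(), 3))
```

The campaign model never sees the individual shots, only the averaged parameter, which is exactly the precision-for-speed trade described above.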
However, as in road trip planning, some parts of the journey require more granular direction. Highly detailed, mission-level simulations fill this role by offering more of a “street view” that represents interactions between entities at a much higher resolution. They can simulate a scenario as granular as an individual aircraft engaging an array of air defense missile launchers. Some can integrate a human to make critical decisions in real time, rather than modeling command-decision processes in software as a campaign-level simulation does.
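A minimal sketch can show what entity-level interaction with a pluggable decision-maker looks like. Everything here is hypothetical (the engagement rules, kill probability, and function names are invented for illustration): the `decide` callback stands in for either scripted doctrine or a human operator supplying targeting decisions in real time.

```python
import random

def mission_sim(decide, n_launchers=3, aircraft_weapons=4, seed=7):
    """Toy entity-level engagement: a single aircraft works through an
    array of air-defense launchers. `decide` picks the next target; it
    could be scripted doctrine or a human operator's real-time input."""
    rng = random.Random(seed)
    launchers_alive = [True] * n_launchers
    weapons = aircraft_weapons
    while weapons > 0 and any(launchers_alive):
        target = decide(launchers_alive)   # the command decision
        weapons -= 1
        if rng.random() < 0.6:             # assumed per-shot kill probability
            launchers_alive[target] = False
    return launchers_alive.count(False)    # launchers destroyed

# A scripted stand-in for a human: always engage the first surviving launcher.
def first_alive(alive):
    return alive.index(True)

destroyed = mission_sim(first_alive)
```

Swapping `first_alive` for a function that blocks on operator input is what distinguishes a human-in-the-loop run from a fully automated one.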
Ideally, mission- and campaign-level simulations should complement each other. However, given that government-owned, mission-level simulations typically belong to the various services, they only authoritatively model a single part of the joint environment. The U.S. Air Force might be forced to combine high-resolution aircraft models with approximate models of adversary or other service capabilities. This creates a challenge to joint planning.
The Pentagon Can Learn From Past Mistakes
How can the Defense Department make the most of campaign- and mission-level simulations? Because the military has little real-world experience or data from large-scale multi-domain operations, defense professionals should integrate system models from all of the services and the intelligence community into a highly detailed representation of a complete joint environment. Given security concerns, it is not surprising that live exercise opportunities to explore existing multi-domain operations are limited. Using simulations to explore difficult problems is a viable alternative, but it is more easily said than done.
Simulation remains America’s best approach to examine future military operations. It not only offers a risk-free venue for testing new concepts, but also enables exploring large-scale defensive or offensive operations with advantages like maintaining secrecy and not provoking adversaries. Three important lessons from past simulation program failures, seemingly obvious in hindsight, could help the military going forward.
First, align program responsibility and authority. Program management should not only have the responsibility for success, but also the authority to control funding. An example of what not to do can be instructive. In 1995, the U.S. government’s Joint Simulation System program was designed to construct a training-focused joint simulation capability by combining the authoritative models from each service. However, the program manager responsible for the program’s success had little de facto authority over most of the participants, including the service components nominally under his supervision. Services had their own funding, not controlled by the program manager. Individual service needs often took precedence over program needs, sometimes resulting in unmet program goals.
Second, create buy-in among those who participate in the simulation. Another 1990s Department of Defense effort, the Joint Modeling and Analysis System (JMASS), was intended to create a mission-level simulation capability that would join high-fidelity, service-specific simulations. The plan required that these service-specific simulations comply with the program’s standards, necessitating rebuilds of the existing simulations and yet further modifications with each subsequent change to the standard. Eventually, frustrated users in the services lost interest. Most of them already had simulations that sufficed, and they saw little benefit in a costly replacement. The need to continually revise their models for joint compliance would not, in their view, produce a reasonable return on investment. Blindness to user needs, along with several other management and technical challenges, figured significantly in JMASS’s eventual demise.
Finally, demonstrate flexibility — when it comes to simulations, one size does not fit all. The Joint Warfare System was yet another 1990s effort to produce a department-wide simulation capability. In this case, the monolithic system was intended to simulate a campaign from “port to foxhole,” eliminating the need to use separate models to represent processes like logistics and combat. The Joint Warfare System tried to tackle the technical challenge of natively providing multiple levels of resolution, thus allowing the user to choose levels of fidelity. In its ambition, the system was continually over budget and behind schedule — it never produced a useful capability for analysts.
Today, campaign-level models are available and widely used around the Defense Department. However, they should be integrated with complementary mission-level simulations. Attempting to meet every need with a single monolithic resource will be costly.
A New Way Forward
Designers should strive to create mechanisms to allow different simulations to communicate with each other. Combining simulations this way offers a promising way forward. In operation, it is roughly analogous to a video teleconference: Invitees call and register with a conference center and are allowed online with other registrants. Individuals can participate from anywhere, interact with each other in numerous ways, and end the conference by hanging up. The process is easily repeatable. Establishing a cooperating combination of simulations (often called a federation) uses the same basic concepts, but the “callers” are simulations, and the lead time to reach interaction is measured in months, given the intricate details involved.
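The teleconference analogy above can be sketched as a minimal publish/subscribe “conference center.” This is an illustrative toy, loosely inspired by how simulation federations exchange events, and is not a description of any real federation middleware (all class and method names are invented): federates join, exchange typed events, and resign when done.

```python
from collections import defaultdict

class FederationHub:
    """Toy 'conference center': federates register, subscribe to topics,
    publish events to other registrants, and resign when finished."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> [(name, callback)]

    def join(self, name, topics, callback):
        for topic in topics:
            self.subscribers[topic].append((name, callback))

    def publish(self, sender, topic, event):
        for name, callback in self.subscribers[topic]:
            if name != sender:                 # don't echo back to the sender
                callback(topic, event)

    def resign(self, name):
        for topic in self.subscribers:
            self.subscribers[topic] = [
                (n, cb) for n, cb in self.subscribers[topic] if n != name]

# Two toy federates: an air simulation reports a track,
# and a ground simulation reacts to it.
log = []
hub = FederationHub()
hub.join("air_sim", ["track"], lambda topic, event: None)
hub.join("ground_sim", ["track"], lambda topic, event: log.append(event))
hub.publish("air_sim", "track", {"id": 101, "type": "aircraft"})
hub.resign("air_sim")
```

In real federations the hard part is not this plumbing but agreeing on the shared data model and timing, which is why the lead time to interaction is measured in months rather than minutes.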
The technology necessary to create this federation already exists and is constantly improving. The U.S. defense and intelligence communities maintain a rich base of simulation resources and models for potential collaboration. The Air Force’s Advanced Framework for Simulation Integration and Modeling includes both fully automated and human-controlled simulation capabilities for existing aircraft. The Navy operates the Next Generation Threat Capability, comprising a similar capability for naval forces from airframes to ocean-going vessels. The Army uses One Semi-Automated Forces to simulate individual soldiers, armored vehicles, artillery systems, and other assets. These simulation systems can also include humans in the loop for control. Meanwhile, several intelligence centers have jointly developed the Integrated Threat Analysis and Simulation Environment based on actual intelligence of adversary capabilities. It evaluates the interaction between U.S. weapons, adversary capabilities, and forces. It integrates high-fidelity models into a holistic “systems of systems” representation of current and future adversary operations.
The Integrated Defense Analysis Capability demonstrates the analytical value of federating existing systems. It already draws on the intelligence community, U.S. Navy, and U.S. Army simulations mentioned above, and aims both to leverage and advance the Pentagon’s existing service-owned simulation capabilities. For far less than one-hundredth of the cost of the 1990s simulation-development efforts, it creates a joint, mission-level model of fully validated and authoritative adversary, U.S. naval air, and U.S. ground combat forces. The vast majority of the Integrated Defense Analysis Capability funding has been given directly to the simulation owners to improve their models and support joint integration. The improvements are themselves enduring capabilities.
The Integrated Defense Analysis Capability effort appears to be on the right path. Its capabilities have recently been used to help the intelligence community assess and update a live operational plan. A key part of the simulation relied on integrating human decisions with models of adversary tactics and capabilities. Simulated engagements were directed by human commanders, producing results that high-level campaign models cannot match. The simulation and resulting analyses led to a revision of the operational plan, which will be run through a second simulation analysis when ready.
In a second simulation event, the Integrated Defense Analysis Capability demonstrated naval air in support of army ground operations against a prepared threat position. In 2020, the Integrated Defense Analysis Capability will add new capabilities from the U.S. Air Force into the federation. The existing federation lacks cyber models, which should be added in the near future. Coordinating logistics among multiple organizations may slow the pace of executing these simulation events, but the much-needed improvements should ultimately help address the needs of multiple combatant commands and other potential users.
Other parts of the proof of concept are aimed at avoiding past mistakes. The program is establishing an Integrated Defense Analysis Capability community of interest to incorporate feedback from end users—an approach that has proven successful in advancing other government programs. The Integrated Defense Analysis Capability is attempting to create a library of lessons learned and reusable artifacts to make building the federation easier and faster. While initial activities have supported operational needs, the program will also provide data for analytical support to the Office of the Secretary of Defense and service customers.
The Military Needs Authoritative Simulations
Modern warfare is increasingly complex. As a result, the military needs an authoritative, all-encompassing simulation capability more than ever. The Pentagon already owns resources that can be combined to address many joint analysis problems. Existing campaign-level models supply the necessary scale, while mission-level models provide important detail, such as keeping human decision-makers tightly in the simulation loop to explore unprecedented new force concepts. The Integrated Defense Analysis Capability approach should be the foundation for future efforts to meet joint simulation needs first expressed decades ago.
Robert Richbourg is a research staff member in the Joint Advanced Warfighting Division at the Institute for Defense Analyses. He holds a Ph.D. in computer science and has over 25 years of experience in modeling and simulation.
June Rodriguez, U.S. Air Force (ret.), works for the MITRE Corp. as an operations research/systems analyst at Defense Intelligence Agency’s Missile and Space Intelligence Center. She holds a Ph.D. in operations research from the Air Force Institute of Technology, with focus on modeling and simulation, statistics and pattern recognition using machine learning algorithms.
Lt. Col. David M. Gohlich, U.S. Army, is an operations research/systems analyst at the Defense Intelligence Agency’s Missile and Space Intelligence Center. He earned a M.S. in operations research from Georgia Institute of Technology in 2011.
James N. Bexfield, U.S. Air Force (ret.), is an adjunct research staff member in the Strategy, Forces and Resources Division at the Institute for Defense Analyses. He is a retired member of the Senior Executive Service from the Office of Cost Assessment and Program Evaluation.
The views, opinions, and findings expressed in this paper should not be construed as representing the official position of either the Institute for Defense Analyses or the Department of Defense.
The appearance of external hyperlinks does not constitute endorsement by the U.S. Department of Defense of the linked websites, or the information, products or services contained therein. The Department of Defense does not exercise any editorial, security, or other control over the information found in these locations.
Richbourg, Bexfield, Gohlich, Rodriguez | IDA NS D-10909 Log 19-000533