Some Modest Proposals for Defense Department Requirements Reform
Defense policy analysts are a fractious bunch. But if there is one thing they all seem to agree on, it is a shared distaste for the current Defense Department acquisition process. Calls for acquisition reform invariably belittle the department’s operational requirements identification process — the crucial first step in producing or fielding new capabilities — as slow, unresponsive, or lacking imagination. Writing earlier this year for War on the Rocks, Jarrett Lane and Michelle Johnson describe the Joint Capabilities Integration and Development System (JCIDS) — a key step in the requirements identification process — as the “Joint Cutting-edge Ideas Death Sentence.”
While it’s a catchy moniker, their critique is off target. Lane and Johnson single out JCIDS, which is actually one of the Pentagon’s more successful processes, and blame it for problems with other parts of the broader acquisition process. That said, we agree that the JCIDS process is flawed and that a number of reforms could improve defense acquisitions.
In this article, we aim to clarify misconceptions about JCIDS in the context of a broader discussion of acquisition reform, based on our practical experience advising a wide variety of clients on JCIDS analysis as well as our academic research into the acquisition process. We describe the process, note its key problems, and propose potential solutions. We hope these incremental, realistic proposals contribute to a measured discussion of Defense Department requirements reform. Optimizing JCIDS is essential for ensuring that the U.S. military not only maintains its overall effectiveness and fighting edge, but does so in an efficient, joint, and cost-saving manner.
What is JCIDS?
JCIDS is the main process for initiating both materiel procurement, in which the Pentagon purchases new capabilities, and non-materiel changes, in which it adjusts military doctrine, training, and other policy areas. The process links strategic guidance from the president and secretary of defense to the development of capabilities that make their way to the frontline combatant commands. Through this process, the services, combatant commands, and other defense agencies jointly determine what “ways and means” they need to complete the missions that civilian decision-makers order them to do.
Importantly, JCIDS should not be conflated with other procurement activities such as the Planning, Programming, Budgeting, and Execution System (PPBE) or the Defense Acquisition System. PPBE is the annual resource allocation process and deals with what capabilities the Pentagon will actually purchase or fund. Meanwhile, the Defense Acquisition System builds the physical capabilities specified by JCIDS, thereby linking PPBE to JCIDS. As the graphic featured in Lane and Johnson’s article shows, JCIDS is only the start of the long, winding military acquisition process.
It is also important to differentiate JCIDS from other requirements processes. Each service has its own requirements process for truly service-specific issues, like tanks for the Army or airborne electronic warfare pods for the Air Force. Lane and Johnson approvingly cite another joint system, the U.S. Special Operations Command version of JCIDS, the Special Operations Forces Capabilities Integration and Development System (SOFCIDS), as having better speed and outcomes than JCIDS. But these are entirely different systems, with different funding, organizational structures, and goals. Specifically, Special Operations Command controls its own money. It has unique access to funding lines such as Major Force Program 11 (MFP-11) dollars, which enables it to essentially fund and implement its own capability solutions. In contrast, it is much harder to get access to similar pots of money at the joint level, and JCIDS does not automatically provide funding when validating joint capability requirements.
Additionally, Special Operations Command controls every aspect of its requirements process. While in JCIDS individual services can sometimes push back against having to deal with “joint” issues, this friction can be largely avoided inside Special Operations Command’s structure, since the services’ special operations forces are technically subordinate to Special Operations Command. Conversely, inside JCIDS the Joint Staff is barred by legislation from executing its own capability solutions, and cannot “command” the services in the same way Special Operations Command can manage other special operations forces. It is unrealistic, therefore, to hold up SOFCIDS as a model for JCIDS; the two systems look alike but are not comparable.
Another often-overlooked aspect of JCIDS is the non-materiel aspect of the requirements generation process. This is where the Pentagon can mitigate capability gaps not by purchasing new materiel solutions, but rather by adjusting policies and processes, like joint doctrine or training. Lane and Johnson, for example, do not discuss the process for implementing non-materiel changes through JCIDS. Ignoring non-materiel innovation misses a large part of what the JCIDS process does.
Unfortunately, the process for implementing non-materiel requirements remains underutilized and ineffective at the joint level for two related reasons. First, non-materiel recommendations do not tend to attract major funding, so these proposed solutions don’t lead to the same flowering of activities as materiel recommendations. Second, non-materiel recommendations tend to exit the requirements process because only materiel capability solutions can continue from JCIDS onto the Defense Acquisition System. Non-materiel solutions shuffle off to other institutional processes or venues, whose stakeholders might quietly kill a recommendation or make only minor changes to satisfy the task.
Where JCIDS Goes Wrong
We do agree with Lane and Johnson that the JCIDS process is far from perfect, although we disagree slightly with what exactly causes the problems. In our view, JCIDS suffers from three major issues. First, the process is better for dealing with long-term capability development than with short-term capability fulfillment or development for combatant command needs. Second, JCIDS cannot effectively enforce change below the joint level, mostly due to statute (Title 10) and a lack of influence (in Pentagon parlance, money). Third, the processes used to achieve consensus within JCIDS are the true way that cutting-edge ideas are killed: either through a single non-concurring opinion among the combatant commands and services, which terminates the project, or death by a thousand paper cuts as the recommendations are gradually diluted until they become practically meaningless. This third area is where Lane and Johnson’s critique rings truest.
Short- Versus Long-Term Capability Planning
There is a disconnect between the Pentagon’s processes for determining immediate, short-term capability gaps at the combatant command level and broader, long-term gaps at the joint level. Each year, combatant commanders give the Joint Staff lists of key problems to fix. These lists, however, tend to be a mix of shorter-term issues, fail to reflect the full range of warfighter needs, and rarely touch on wider capability gaps. This is understandable given that commanders (and their staffs) are in a specific command for only a few years. Moreover, in wartime short-term capability issues can be life-savers.
Still, combatant commanders inadvertently contribute to broader dysfunction by prioritizing stop-gap fixes and ignoring longer-term capability issues, which the JCIDS process is better suited to address. True rapid capability needs can — and should — be solved through non-JCIDS pathways. These quick fixes, however, tend to be one-off, off-the-shelf solutions that only perfunctorily fill the capability gap, are difficult to maintain, and are soon discarded.
Although rapid capability processes and combatant command gap assessments can, in theory, lead to longer-term JCIDS assessments, this rarely occurs in practice. Combatant commanders have less interest in identifying long-term gaps or shepherding them through the JCIDS process since capabilities usually arrive after they rotate out. Nevertheless, they and their staffs should try to participate in JCIDS to deal with these longer-term issues. Otherwise, joint capability assessments will remain primarily motivated by internal Pentagon priorities rather than objective, down-range operational demands. For its part, the Joint Staff needs to both listen better to capability-based concerns coming out of the commands and incentivize commanders to prioritize longer-term thinking.
Change Below the Joint Level
The root of the second problem — JCIDS’ inability to effect change below the joint level — is that the Joint Staff is expressly barred by the Goldwater-Nichols Act and current U.S. Code in Title 10 from unilaterally making certain major changes to how the military operates. As a Joint Staff-owned process, JCIDS must rely on the services to implement its recommendations. As a result, the joint force is actually not built as a joint force — it is a product of the agglomeration of service-parochial capabilities. To be sure, the joint force has become more joint over the past few years. Still, the divide between the services and joint equities — for instance, bureaucratic fights over unique capability needs or who “owns” what missions — hobbles the department’s ability to build and fight as one force. JCIDS is intended to iron out these issues, but without the authority to directly enforce change, it is less effective than it could be.
Finally, the processes used to achieve consensus within JCIDS stunt speed and imagination. Here we agree with Lane and Johnson that there is an imbalance between consensus and speed. JCIDS is an incredibly open process and lets virtually anyone comment on any ongoing analysis. This opens the door for unhelpful comments from non-participating, ill-informed, or self-appointed “stakeholders” at each stage of document review. A representative can also slow review by critically non-concurring with a finding even if all other stakeholders support the analysis.
Achieving consensus on recommendations is inherently difficult in any large organization, and the size and complexity of the Defense Department make it harder still. JCIDS documents must go through multiple rounds of review, and at any point a document can be sent back for edits, leading to delays in approval. These practices were designed to foster collaboration and ensure appropriate review, but the degree of agency granted to informed and uninformed participants alike subverts the original intent of consensus and kills good ideas.
Non-Issues for JCIDS
Lane and Johnson identify several issues with JCIDS that we don’t see as problematic. First, as noted, they think JCIDS analysis is slow — we concur, but believe they misidentify the reason for the problem. They write: “JCIDS has evolved into a slow, onerous process that values finely crafted requirements over both innovation and the timely delivery of capabilities.”
It is cliché, but true, that when acquiring new capabilities the Defense Department often acts like a child encountering a shiny new toy: gimme! Many proposals for defense acquisition reforms, therefore, focus on how to get more toys faster. But setting aside the problems with consensus-building, the materiel part of JCIDS is not the cause of slow acquisition. In fact, compared to the later steps in the process, it is relatively quick and straightforward to develop an Initial Capabilities Document for materiel acquisitions. When Lane and Johnson call the requirements process “inward-looking and slow,” we contend that the true blame lies with other parts of the acquisition process. For instance, long product development and production timelines in post-JCIDS stages of materiel development can make once-innovative materiel capabilities obsolete by the time they are fielded. This is not an issue with JCIDS.
Second, the authors write that JCIDS analysis prejudges capability solutions: “the Defense Department typically predetermines the solution it seeks, spending far too little time analyzing and truly understanding the problem.” In fact, however, the early stages of JCIDS-appropriate analysis — specifically capabilities-based assessments — actually take an agnostic view of whether gaps should be filled with a materiel or non-materiel solution. Analysts also take great care to identify capability needs and gaps first, before recommending any solutions. Lastly, the resulting recommendations are often intentionally broad so that later stages of the acquisition process or non-materiel change venues can more specifically determine precise solutions. Although we agree that some JCIDS analyses have gone forward with predetermined solutions, this represents the exception rather than the rule, and tends to only occur when analysts explicitly go against JCIDS guidance and best practices.
Third, Lane and Johnson believe that including a larger set of competitive stakeholders, including those from industry, earlier in the process will mitigate the problem of predetermined solutions. This strikes us as fundamentally inappropriate. It is the Pentagon’s job to determine what missions it needs to fulfill and industry’s job to help it do so. Industry is liable to be even more biased towards reverse-engineering requirements to fit a pre-existing solution than the services or the Joint Staff, because private companies are profit-seeking. There is a clear rationale for why requests for proposals from industry don’t occur until after “Milestone A,” the decision to proceed with a materiel solution: You can’t know what to buy until after you know what you need to do. We agree with Lane and Johnson about the importance of correctly identifying capability gaps, but it’s not clear that earlier involvement from industry stakeholders will help do this. Moreover, we believe they vastly exaggerate existing problems with gap assessments.
Improving the Requirements Process
The easiest way to sharpen the requirements process is to ensure that the gaps cited by combatant commanders actually drive joint requirement analyses. Some of these gaps will turn out to simply be capacity issues — more special operations forces, more drones — while others will be capability issues, which should immediately kick off studies like capabilities-based assessments or other analyses suitable for JCIDS. The loss of Joint Forces Command — a four-star combatant command that served as a key linkage between warfighting components and the Joint Staff — degraded this connection to JCIDS. The Joint Staff needs to sponsor or direct more studies that explicitly focus on warfighter needs, not esoteric joint priorities or solutions in search of a problem. Additionally, other department components and activities, such as service warfighting centers, futures commands, and wargame/exercise outcomes, should be institutionally required to drive JCIDS (as they drive service capability development). This will improve capability gap analysis, mitigating Lane and Johnson’s concerns about inappropriate analysis of problems and predetermined solutions.
Further, when combatant commands adopt off-the-shelf solutions to mitigate immediate capability gaps, they should have to explain how they will maintain and update these solutions in a sustainable manner. To be fair, there will always be a trade-off between mitigating an urgent gap as quickly as possible and ensuring long-term sustainability. A balanced solution would be to require a long-term lifecycle management plan after the gap has been filled, the capability proven, and the desire for its continued use established.
Second, capability development at the joint level must oversee and enforce corresponding development in the services. Many commentators call for comprehensive Goldwater-Nichols reform, generally concurring that congressional legislation is the only way to fundamentally restructure the military. Short of that, the Joint Staff should be empowered to fund the services to make capability changes in line with JCIDS recommendations. Time and time again, when given funding to fill capability gaps on their own, the services have retreated to defending parochial bureaucratic interests and culturally embedded priorities. For example, in the midst of the counter-insurgency and counter-terrorism wars of this century, the services did everything in their power to jettison irregular warfare capabilities as rapidly as possible. Empowering the Joint Staff to directly mandate capability changes would counterbalance these interests and ensure effective change.
The current processes for achieving consensus on JCIDS documents need reform. Rather than building consensus at the end while staffing a JCIDS document, that is, sending it out for review to all department components, consensus should be built throughout the study.
Study sponsors should be required to develop a charter for the study working group that includes subject matter experts from any relevant Defense Department agency. When appropriate, these experts should have the ability to override disagreements coming from higher levels of command authority based on their insight into the topic area, the progression of the study, and the development of a JCIDS document. While this may raise concerns about subverting the command nature of the department, it addresses one of the existing death knells for innovative ideas: ignoring the ideas of recently operational officers and enlisted personnel with combat experience. When they feel ignored, these bright young minds often choose to exit the service and take their valuable experience with them. Implementing reform to personnel management would help make future senior leaders more responsive and engaged on innovation matters throughout the Defense Department.
Finally, it is inane that the JCIDS process can be slowed or halted based on the critical comment of a random officer, civilian, or contractor. The Pentagon should curtail the services’ ability to override joint capability development, giving greater weight to the votes of the geographic and functional combatant commanders, who actually employ the capabilities downrange. This should not interfere with the development of truly service-specific capabilities, but such needs should never outweigh joint needs.
We are aware that Defense Department culture is trapped between a genuine drive towards efficiency and bureaucratic inertia. Moreover, these recommended changes won’t fix all the problems with requirements development and defense acquisition. Still, they are a starting point for discussions on improving the speed and relevance of capability development by getting the right people, processes, and tools in the hands of the joint force. We are not making pie-in-the-sky calls to scrap current processes wholesale when alternatives are either unclear, underdeveloped, or impossible to implement. Rather, we believe our modest proposals are the sort of incremental reform that stands a real chance of providing tangible benefits to Pentagon processes. They are a solid foundation for the important debate over requirements and acquisition reform that the Defense Department needs to have.
Colin Jones and Alexander Kirss are defense consultants in Washington, D.C., specializing in capability and portfolio analyses with extensive experience in JCIDS. Mr. Jones earned his M.A. in International Affairs from the George Washington University while Mr. Kirss earned his M.A. in International Relations from the University of Chicago and is currently pursuing a PhD in Political Science at the George Washington University.
Image: Naval Air Systems Command