The Future of Air Superiority, Part IV: Autonomy, Survivability, and Getting to 2030


Editor’s Note: Do not miss the first article in this series, “The Imperative”; the second, “The 2030 Problem”; and the third, “Defeating A2/AD.”

 

We will require fresh thinking to control the skies of the future. Gaining and maintaining air superiority in 2030 will require new concepts of operation. It will require a rejection of platform-based thinking that yearns for a “silver bullet” solution. And it will require airmen and joint leaders able to apply operational art across domains. While these intellectual foundations are certainly the most critical aspects of success in 2030, it is also true that concepts of operation dependent on outdated technology will fail. Any family of capabilities able to solve the 2030 problem will ultimately be composed of platforms across all domains and from all services. If airmen and joint leaders in 2030 lack key capabilities, it will not matter how skilled they are in warfighting or operational art. The most brilliant commander today, equipped only with the technologies of yesterday, is doomed to fail in combat.

With that in mind, this final installment of the series expands on previous discussions of the key attributes of the air superiority 2030 family of capabilities and covers some of the recommendations our team made with respect to force development and acquisition methodologies.

One of the attributes discussed in the last installment of this series was autonomy. The Enterprise Capabilities Collaboration Team (ECCT) saw several uses for autonomous systems in assisting with data and network management. Many readers likely noted that I did not discuss autonomy more broadly, nor did I discuss whether our team foresaw future platforms in our air superiority force structure being manned or unmanned. The reason for this is relatively simple: Whether something is manned or unmanned does not provide capability in and of itself. Sometimes it makes sense to have a human present; sometimes it does not. In short, we were agnostic on the topic. If having a human onboard a particular platform makes it more effective, it should have a human on board. If humans limit the capability of a platform, they should be engineered out. Detailed analysis prior to and during the development of each capability within the air superiority family should determine the answer to the manned-versus-unmanned question. Nonetheless, some broad considerations and perspectives on this topic are worth discussing in slightly more detail to inform future assessments.

War is fought in an environment beset by fog and friction. Because war is a contest of wills, a fighting force will do everything possible to impose more fog and friction on its enemies. For millennia, military forces have attacked adversary command and control networks to do just that. In ancient times, that command and control network consisted of military messengers either on foot or horseback. Later, Genghis Khan’s homing pigeons passed information and orders across his empire. Later still, the complex ciphers and codebreaking of World War II would play a decisive role.

What does this have to do with manned versus unmanned flight? We can be sure that adversaries will attempt to degrade or deny our communication networks, whether the network we pass information on or the network through which we exert command and control. In the context of platforms used for air superiority, the types and resiliency of the networks we use vary significantly between manned, remotely manned (i.e., piloted from a ground station, as with an MQ-1 or MQ-9), and autonomous systems. Remotely manned systems present the biggest challenges, as they require a high bandwidth of secure and reliable global communications. This is likely an untenable option for fighting in highly contested airspace. Even an agile, smart, and self-healing network cannot maintain bandwidth and throughput in the face of raw jamming power projected over short distances.
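
The physics behind that last point is unforgiving. A relayed signal fades with the square of distance, so a modest jammer operating close to the receiver can bury a far more powerful transmitter operating from long range. The sketch below is a minimal, illustrative link-budget comparison using a simplified free-space model; the frequency, power levels, and distances are invented for the example, and a real analysis would also account for antenna gains, processing gain, and atmospheric losses.

```python
import math

C = 3.0e8  # speed of light in m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB for an isotropic link."""
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / C))

def received_dbw(tx_power_w: float, distance_m: float, freq_hz: float) -> float:
    """Received power in dBW, ignoring antenna gains and processing gain."""
    return 10 * math.log10(tx_power_w) - fspl_db(distance_m, freq_hz)

FREQ_HZ = 10e9  # a notional 10 GHz SATCOM-band carrier

# Desired signal: a 100 W transponder relayed from geostationary orbit (~36,000 km).
signal = received_dbw(100, 36_000e3, FREQ_HZ)

# Jammer: a 1 kW emitter only 100 km from the receiving platform.
jammer = received_dbw(1_000, 100e3, FREQ_HZ)

print(f"signal: {signal:.1f} dBW, jammer: {jammer:.1f} dBW")
print(f"jam-to-signal ratio: {jammer - signal:.1f} dB")  # ~61 dB in the jammer's favor
```

With these illustrative numbers, the nearby jammer arrives roughly 60 dB stronger than the relayed signal, a factor of about a million. No amount of network agility recovers that margin, which is why remotely manned platforms are poorly suited to highly contested airspace.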

Counterintuitively, autonomous and manned platforms are similar in their bandwidth requirements. This makes sense when one considers that a manned platform is also autonomous, at least from the network’s perspective. The commander must order it to do its mission, but once so ordered, the autonomous brains on the platform — whether artificial or human — execute the mission on their own without the need for an elaborate or robust communication network reaching back to a ground station. This simplifies the problem of determining whether and where a human should be in the loop. The key question becomes: Where does it make sense to add autonomy? In other words, at what point in the mission chain are we confident artificial intelligence or algorithms will allow the machine to do more effectively or efficiently what humans have done in the past?

While this may seem a new question, it isn’t. For decades, fighter aviation has been constantly adjusting the point of autonomy. As the fighter community moved from guns to missiles as the primary air-to-air weapon, what it really did was assign part of the mission chain (targeting and killing, in this case) to an autonomous (albeit kamikaze) wingman. Early on, missiles were short-range weapons, but today medium- and long-range missiles fly autonomously well beyond visual range. Moving the point of autonomy using concepts such as an arsenal plane, a longer-range air-to-air weapon, or an unmanned “loyal wingman” merely extends the logic fighter aviation has used since the advent of the missile age. Does an autonomous option for a platform or weapon fill a gap and provide capability? That should be the first question. And if the answer is yes, the next questions should be: What is the cost, and what is the technical readiness? This is all part of the detailed tradespace analysis conducted when planning the development of any complex weapons system.

Survivability is the second key attribute that must be evaluated as part of any capability development effort. For nearly three decades, from the earliest days of the F-117 over Iraq to the most recent employment of F-22s over Syria, stealth provided the U.S. Air Force a distinct operational advantage. As a result, many have come to regard survivability as synonymous with stealth. Others have argued that stealth is an outdated technology the Air Force should abandon. Neither perspective is correct; the truth lies somewhere in between.

Stealth is not dead, but it is also not the only attribute that contributes to the survivability of Air Force weapons and platforms. Survivability should be the true focus of analysis and discussion. This is a complex discussion, as aircraft signature, redundancy of onboard systems, speed, maneuverability, and electronic attack capability all interact to contribute to survivability. How a particular design implements and optimizes the tradespace between all of these depends on a host of factors, not least of which is the state of all the relevant technologies. Take the F-117 Nighthawk, for example. While the United States had made the critical breakthrough in technology needed to create a stealth fighter, the state of the technology was such that it was incompatible with supersonic speeds or high maneuverability. Thus, engineers focused their efforts on optimizing the F-117’s signature. It survived by being unseen, not by being faster or nimbler. Though the F-117 was nearly invisible to radar, it was restricted to flying only at night to avoid another key sensor in aerial combat — the human eye. Flying during the day could have resulted in the F-117 being seen by an enemy fighter pilot. Had that happened, it would have been nearly impossible for the Nighthawk to survive.

Fast forward 20 years to the early 2000s, and the F-22 Raptor found a different balance of attributes to maximize survivability. While stealthy, the F-22 is also fast and highly maneuverable. New technologies available to U.S. engineers, along with advances in adversary air defenses, drove a radically new design. Combining “supercruise” capability with high maneuverability, the F-22 can fight and survive in places the F-117 could not.

One other attribute affects survivability: lethality. To paraphrase the tactics manual I read as a young F-16 pilot, the best way to ensure you survive is to make sure the enemy does not.  This remains true today, and it complicates any discussion of the trades one must make when designing an aircraft. The depth of a magazine, the quality of sensors that allow more accurate targeting, and the effectiveness of weapons (kinetic or non-kinetic) all impact survivability. But one cannot have everything. Increase the magazine too much and the platform becomes too large to maneuver. Add too many sensors and the signature of the aircraft might be compromised. Optimizing this tradespace across the entire air superiority family of capabilities will require detailed analysis of all of these attributes.
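
Tradespace analysis of this kind is, at bottom, a multi-objective optimization problem: no single design maximizes every attribute, so analysts map the set of non-dominated (Pareto-optimal) designs and let operational judgment choose among them. The sketch below is a deliberately toy illustration of that structure; the attribute scores are invented for the example and bear no relation to any real aircraft.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Design:
    magazine: int  # number of weapons carried
    sensors: int   # number of sensor apertures

    @property
    def lethality(self) -> float:
        # A deeper magazine and better targeting both add lethality.
        return self.magazine + 0.5 * self.sensors

    @property
    def survivability(self) -> float:
        # A bigger magazine grows the airframe; more apertures grow the signature.
        return 10.0 - 0.8 * self.magazine - 0.6 * self.sensors

def dominates(a: Design, b: Design) -> bool:
    """True if design a is at least as good as b on both axes and better on one."""
    return (a.lethality >= b.lethality
            and a.survivability >= b.survivability
            and (a.lethality > b.lethality or a.survivability > b.survivability))

candidates = [Design(m, s) for m, s in product(range(2, 9), range(1, 5))]
pareto = [d for d in candidates if not any(dominates(o, d) for o in candidates)]

for d in sorted(pareto, key=lambda x: x.survivability, reverse=True):
    print(f"magazine={d.magazine} sensors={d.sensors} "
          f"lethality={d.lethality:.1f} survivability={d.survivability:.1f}")
```

A real effort would weigh dozens of interacting attributes with high-fidelity models, but the shape of the problem is the same: eliminate dominated designs, then choose among the survivors based on the concept of operations.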

This need for detailed tradespace analysis led our air superiority team to recommend to senior Air Force leaders that we abandon talk of a “sixth generation” fighter. Instead, we suggested the Air Force focus on defining the required attributes of a penetrating counter-air (PCA) capability. We took this approach for several reasons.

First, using the terminology of sixth generation risks getting into a discussion about what it means and how to define it. The barrage of questions that follows usually includes: Is it hypersonic? How stealthy is it? Does it carry directed energy weapons? How high can it fly? Is it manned? These are all good things to know, but not in the context of defining sixth generation. When it started the Advanced Tactical Fighter (ATF) program in the 1980s, the Air Force did not set out to create a fifth-generation capability. Rather, it set out to create an aircraft that could operate in the expected operational environment of the early 2000s. Only after building the Raptor and seeing the tremendous advantage it provided did the Air Force conceive of it as a generational leap from F-15s and F-16s. Then, using the fifth-generation F-22 as a baseline, we began retroactively classifying older fighters with this new construct. Consequently, as a young F-16 pilot I was unaware that I was flying a fourth-generation platform. We only defined the F-16 (and F-15) as such after the Air Force adopted the fifth-generation paradigm to tell the F-22 story.

The other word we avoided in the discussion of PCA was “fighter.” While to some this is sacrilege, the rationale is sound. When they hear the word “car,” most people envision a four-wheeled enclosed vehicle, typically propelled by an internal combustion engine, with a range of 200 to 400 miles and a top speed of around 120 to 150 miles per hour. We all possess mental models that define a car in that way. The same is true of “fighter.” In the modern context, most people have a mental model of a short-range, highly maneuverable, supersonic, manned aircraft, typically armed with a limited number of missiles and a gun. A future PCA may not fit this model. Part III of this series highlighted the importance of increased range. Payload is also important, as increasing magazine depth allows for greater persistence and improved lethality. Maneuverability and speed will be important, too, but may not fit our traditional definition of a fighter, either. In the end, I fully expect we will call PCA a fighter and give it an F-designation. But we need to be willing to challenge our assumptions and expand our thinking about how we balance the tradespace of any platform in the air superiority family of capabilities.

While the PCA has garnered much of the focus in the wake of the release of the Air Superiority 2030 Flight Plan, it is only one part of the solution to the air superiority problem of 2030. Several other air, space, and cyberspace capabilities will be critical to control of the air. As mentioned previously, the front end of the kill chain — the ability to find, fix, and track — was the most difficult part to achieve. While space and cyberspace capabilities cannot support this part of the kill chain without air domain contributions, the inverse is also true. For instance, we have become accustomed to finding, fixing, and targeting ground forces by placing a remotely piloted aircraft overhead with full motion video. That will not be possible in highly contested 2030 threat environments. We certainly will still use airborne sensors to search for targets, but we will also use space- and cyberspace-derived information in near-real time to aid the targeting process. Using cyberspace to degrade an enemy command and control network or disrupt key enemy infrastructure may also be possible, though the nature of the cyberspace environment 15 years hence is extremely difficult to predict.

Future commanders will need to understand each domain and the capabilities it brings to the table as they make decisions to apportion their forces. How will the Air Force develop that future commander? What set of education and experiences do future commanders need to succeed in the 2030 operational environment, and how can the Air Force provide these? The answers to these questions could effect tremendous change in professional military education, career paths, and leadership opportunities. We must start now to develop those airmen. The majors and lieutenant colonels of today are the senior general officers of 2030, and they will need this knowledge and experience to effectively employ the multi-domain capabilities that will be in the field by then.

This leads to a final question: How do we get capabilities to the field by 2030? That is only 13 years away, after all. Under traditional acquisition approaches, most major defense programs take many more years than that to complete. Many others have noted the shortfalls of defense acquisition, and I will not repeat them here. Correcting these is an increasing area of focus for the Department of Defense, the services, and Congress. Often the reason cited is a need to be better custodians of taxpayer dollars, or to eliminate waste. While I appreciate that rationale as a taxpayer, as an airman I would add another: If the Department of Defense does not change its acquisition approach, our capability development will be outpaced by others around the world. We are already behind in many areas, and we must act now or our remaining technological advantages will continue to erode. Thus, to the fiscal imperative we must add an operational imperative: We must improve our ability to develop and field capability in the information age or we will not win the 2030 fight.

Our team recommended four tenets to increase the speed of capability development. First, requirements discipline — the ability to know the basics of what you need and stick to them. Overly complex or changing requirements create instability and start a cycle of delays and cost increases that is nearly impossible to break. The initial change in requirements drives an increase in development and delivery timelines, as additional engineering and testing must now be built into the program. That pushes the fielding of a capability to the right. As the timeline extends, the projected threat environment changes, incentivizing further requirements changes to meet the evolving threat. And the cycle repeats itself.

A far better approach is to stick to a basic requirement up front while building enough margin into the design to modify and add capabilities over time. A positive historical example in this regard is the F-16. The aircraft was originally envisioned as a daytime, visual-flight-rules (VFR)-only fighter, and John Boyd held ruthlessly to this basic requirement. After fielding, however, the F-16 evolved: from daytime VFR-only missions, to low-altitude night missions with laser-guided bombs, to the suppression of enemy air defenses as a SAM-killing Wild Weasel. As the world changed, so did the Viper.

Second, the Air Force should reinvigorate the concept of parallel development. This centers on the idea that various technological development cycles are not naturally synchronized. There are industrial development cycles for components such as aircraft outer mold lines, spacelift, and engines; these items can take a decade to advance. There are also hardware development cycles, which generally follow Moore’s law; CPUs, sensor arrays, and other apertures are on this typical two- to five-year cycle. Finally, there are software development cycles that run in minutes or months. The idea behind parallel development is to mature each component of a spacecraft, aircraft, or cyberspace tool in a separate line of development, outside a formal program. Once a technology reaches the appropriate level of maturity, it can be ported out of that parallel line of development and integrated into a program. Meanwhile, the technology development line continues working the next iteration of capability to ready it for future use.

Done correctly, with consistent funding and focus, parallel development can significantly reduce the technical risk in any program. The F-117 is a good example of this technique in action. Effort on stealth technology had progressed in one line of development, advanced flight controls in another, and various other subcomponents came from yet others. Once the technology was mature across all of the required systems, it was brought together into the F-117 program. This allowed the Air Force to manage the risk more easily: With technical risk retired outside the program, what remained was integration risk. And while that risk was non-trivial, the program brought no unnecessary risk into integration, because it used mature and, in some cases, already fielded subcomponents.

Third, the Air Force should manage integration risk. Again, this is not a trivial task on a complex weapons system. However, prototyping and experimentation provide an elegant solution. The F-117 did this correctly by building an essentially fieldable prototype before entering its limited production run. More recently, the F-22 program began with a flyoff between the YF-22 and YF-23 prototypes. In truth, these aircraft were technology demonstrators rather than true prototypes, similar to the X-planes developed at the outset of the F-35 program. They did not contain all of the systems and subsystems the production aircraft would need. For prototyping to truly work, we must move beyond the technology demonstrators these programs used and instead integrate the subsystems onto the capability we are trying to field. Only then can we evaluate whether it does what we need it to do.

Once that evaluation of the prototype is complete, it is time to decide whether to declare a program and invest in the long-lead items needed for production. In the event a production decision is made, program managers must hold fast to stable requirements. Parallel lines of technology development will have progressed, tempting operators and developers to adjust requirements. Do not succumb to this temptation. Maintain requirements stability and instead include newly developed technologies in later increments or blocks. Importing these into the baseline aircraft post-prototyping will only delay and derail the fielding of capability. If, on the other hand, you decide that the prototype does not provide enough of an increase in capability to warrant production, apply the learning from prototyping to the next iteration of development.

Even in the first case, when a decision is made to enter production on the first prototype, technology development and planning for follow-on increments and blocks must continue. This was the ECCT’s fourth recommendation: Take an incremental approach to capability development. As technologies mature through parallel development, they should feed successive prototyping phases, which will likely result in multiple blocks or increments of capability within a single program. As new blocks or increments enter the force, older ones must be repurposed or retired. As the pace of technological change increases, we should expect the pace of change in our force structure to increase as well. Keeping capabilities of any kind in military inventories for decades invites irrelevance. Sustaining old capabilities also ties up significant resources as operating costs increase over time. We must develop, test, field, and retire capabilities on a much faster cycle than in the last several decades. We must invest in the future rather than sustain the past.

A pace of technological change similar to today’s occurred in the 1950s and 1960s, leading to the fielding of the “century series” of fighter aircraft. During this period, the pace of change was driven not by Moore’s law but by Bernoulli’s. Aerodynamic engineers applying the Swiss mathematician’s equations were rapidly learning how to build more effective and efficient airfoils, allowing them to build aircraft capable of greater speed, range, and maneuverability. The rapid pace of advancement required the constant fielding of new aircraft to keep pace with technology. Every five to seven years, we fielded new platforms that could fly higher, faster, and farther.
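
For readers who have not seen it since school, the relation in question, Bernoulli’s equation for steady, incompressible flow along a streamline, can be written as:

$$p + \tfrac{1}{2}\rho v^{2} = p_{0}$$

where $p$ is static pressure, $\rho$ is air density, $v$ is the local flow speed, and $p_{0}$ is the constant stagnation (total) pressure. Faster flow over an airfoil’s upper surface means lower static pressure there, which is the source of lift. This incompressible form was only the starting point; the transonic and supersonic designs of the century series required compressible-flow extensions of it.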

Development of a PCA aircraft and other air domain capabilities needs to adopt this mindset lest we continue to fall behind. But this is not just an air domain issue. Maintaining relevance in cyberspace certainly will require rapid fielding in response to technological change. Furthermore, as the ability to launch cubesat and nanosat capabilities matures, we must look to emulate the “century series” fighter mentality in the fielding of space capabilities, as well. We cannot accept using industrial-age acquisition timelines in an information-age world.

As I noted in the opening article of this series, building the force to achieve air superiority in 2030 will take time, effort, and sustained commitment. Technology and platform development are not a panacea, but focusing on the fundamentals of capability development, maintaining requirements discipline, and using an acquisition game plan that leverages experimentation and prototyping are prerequisites to success. Pairing these acquisition and development techniques with new concepts of operation, and with the development of airmen and joint leaders able to leverage strengths across all domains, will get us there.

As I said at the outset, air superiority is not an optional capability. Without it, we will lose.

 

Alex Grynkewich is a Brigadier General in the U.S. Air Force and an F-16 and F-22 fighter pilot.  He most recently served as the Chief of Strategic Planning Integration at Headquarters Air Force and as the Air Superiority 2030 Enterprise Capabilities Collaboration Team lead.

The opinions expressed above are those of the author and do not necessarily reflect the views of the Department of Defense or the U.S. Air Force.

Image: U.S. Air Force photo by Yasuo Osakabe