Building R2-D2

When Star Wars debuted in 1977, it marked a major departure from earlier depictions of sci-fi robots. R2-D2 was a cylindrical, round-headed, three-legged “astromech” droid that communicated in whistles, although it understood human speech. As the trilogy progressed, several things became clear — the little droid was brutally mission-focused, unfailingly loyal, flat calm under pressure, and, if it can be said, more than a little sneaky. These are all traditionally human attributes. It was also versatile, competent, and reliable, and not at all shy or deferential.

R2-D2 was no mere automaton — it was an artificial intelligence wrapped in an independently mobile shell with a variety of capabilities. Most importantly, R2-D2 was versatile enough to form both a machine-machine team and a human-machine team. Fiction notwithstanding, the depiction of an R2 unit offers a viable model for the creation of an AI crew member, and for how such a “crew member,” somewhat like R2-D2, could fit into a modern squadron.

Along with the emerging potential of AI comes the inevitable pursuit of it for aircraft applications. The discussion covers the full range of possibilities, from smart calculators or “decision aids” to fully autonomous aircraft “brains.” Indeed, automated systems to assist humans made their appearance in 1912, nine short years after the Wright brothers’ first flight, with Lawrence Sperry’s introduction of the autopilot into a Curtiss C-2 biplane. Analog, and later digital, technology allowed aviators to turn more tasks over to a computer, including navigation and the all-important terrain avoidance task. But the fully autonomous aircraft capable of executing complex tasks is elusive and likely to remain so.

The history of aviation development in the last half-century has largely been one of incremental improvements, and some kind of AI might be desirable for the F-15EX or future aircraft, introducing new capabilities into a human-machine team. In effect, engineers and technologists envision the addition of a third crew member to the F-15EX, building the real-world equivalent of Luke Skywalker’s venerated droid. But in order to do that, there must be some understanding that AI is not a magic application that can be seamlessly inserted into a combat aviation enterprise.

The March of Automation

Automation has made steady progress in aircraft development. The autopilot was so easily and universally accepted that it was often referred to as “George,” as if it were an aviator itself. Among the systems used to improve the capability of the aircrew was terrain-following radar, introduced in the early 1960s, which paired the autopilot with a radar and allowed safer flight at low level. A number of Century Series fighters had a version of this kind of radar, as did the RF-4C and the F-111. The F-111A’s AN/APQ-110 analog system allowed for automatic (hands-off) terrain following at low altitude and high speed — when upgraded to the digital AN/APQ-115, it allowed for flight at 200 feet at supersonic speeds. The B-1, MC-130, and F-15E are among the aircraft that use this type of radar today. As autopilots became more advanced, fly-by-wire aircraft like the F-16 allowed for tighter coupling of the automatic system and the flight controls.

Accordingly, the F-16 was the first to get an Automatic Ground Collision Avoidance System, which is capable of taking control of an aircraft to recover it when the computer’s model indicates that terrain impact is imminent. This system has already saved several pilots. Today, even trainer aircraft like the T-6 have a flight management system that greatly simplifies tasks associated with basic and instrument navigation. The F-35 has an automatic system that allows the aircraft, like many airliners, to fly an instrument approach mostly by itself. The extensive use of automated systems in modern aviation follows a long period of use that has proven their safety and utility, potentially paving the way for the next step — the AI crew member.
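
The core logic behind such a system can be sketched in a few lines: project the trajectory forward over the time needed to recover, and compare it to the terrain. The sketch below is a deliberately simplified illustration, with all names and numbers invented here rather than drawn from the fielded system.

```python
def terrain_impact_predicted(alt_msl_ft: float,
                             descent_rate_fps: float,
                             recovery_time_s: float,
                             terrain_elev_ft: float) -> bool:
    """Project altitude forward over the recovery window; if the aircraft
    would be at or below the terrain, an automatic recovery is needed.
    A toy model: the real system flies a full predicted trajectory
    against a digital terrain database."""
    projected_alt_ft = alt_msl_ft - descent_rate_fps * recovery_time_s
    return projected_alt_ft <= terrain_elev_ft

# 2,000 ft MSL, descending 300 ft/s, 5 s needed to recover, terrain at
# 800 ft: projected altitude is 500 ft, below the ridge, so recover now.
if terrain_impact_predicted(2000.0, 300.0, 5.0, 800.0):
    print("AUTO RECOVERY: predicted terrain impact")
```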

An F-15E at low altitude on the North Carolina coast on Sept. 3, 2020. Terrain-following radar allows this profile (and lower) to be flown in complete darkness. (U.S. Air Force photo by Airman 1st Class Kimberly Barrera)

A Smarter “George”

Terrain-following radar and the Auto Ground Collision Avoidance System, as advanced as they are, are merely forms of automation. Both are discrete-event controllers, programmed in advance to accomplish a very limited task set under specified parameters. Their function is fixed — any change in capabilities requires new software to be written by a programmer. 

Enter the AI. Modern AI is not, alas, nearly as capable as George Lucas’ creations in 1977. The kinds of problems suited for AI applications are those that require math — lots and lots of math, done quickly. But there are tasks in aviation that could be greatly enhanced if a hyperthreaded, silicon-based math savant were present in the cockpit. Terrain-following radar can see what is in front of it but is limited by the profile of the next hill. In the future, AI might use a store of digital data to characterize the terrain over the next hill in order to plan an evasive path through it. The AI might act as a decision aid, draw fine details out of sensor data, manage defensive electronic warfare systems, or build and maintain an accurate picture of both friendly and adversary forces. Many of these things are today done by humans, but it may be that some of them can be done faster or better by a machine, which would free up the human crew to manage more of the mission and less of the mission overhead. The machine itself cannot do all of the tasks required for combat aviation, but the known capabilities of today’s AI could be leveraged to create a very powerful human-machine team if the team can be effectively formed.
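
As a concrete, heavily simplified sketch of the terrain idea, consider an AI scoring candidate low-level routes against a stored elevation grid, which a line-of-sight radar cannot do. The grid, path format, and function below are invented for illustration; a real system would query a terrain database such as DTED.

```python
# Toy digital terrain grid (elevations in feet). A real implementation
# would query a terrain database; these values are purely illustrative.
TERRAIN_FT = [
    [400, 450, 500, 520],
    [420, 900, 950, 540],   # a ridge line the radar cannot see behind
    [410, 460, 480, 510],
]

def max_elevation_along(path):
    """Highest terrain under a candidate path, given as (row, col) grid
    cells. Lets a planner score routes beyond radar line of sight."""
    return max(TERRAIN_FT[row][col] for row, col in path)

# Compare two candidate routes: straight over the ridge vs. around it.
over_ridge = [(0, 1), (1, 1), (2, 1)]
around     = [(0, 0), (1, 0), (2, 0)]
print(max_elevation_along(over_ridge))  # 900 -> forces a climb
print(max_elevation_along(around))      # 420 -> stays low
```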

Training R2-D2

In simple terms: when computer scientists talk about training an AI, they are describing a process that is only loosely analogous to training humans. For a human, training material is often experience, which adjusts the human’s future performance. For an AI, the training material is data, which is used to adjust the algorithms that govern the AI’s function. Both methods focus on training the individual agent. Training a team is something else entirely.
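
To make the machine half of that contrast concrete, here is a toy illustration of what training means for a machine: data repeatedly nudges internal parameters until the outputs improve. It is generic gradient descent on a single weight and stands in for no particular fielded system.

```python
# Toy training loop: the "experience" is data, and learning is nothing
# more than adjusting a parameter to shrink the error on that data.
weight = 0.0
learning_rate = 0.1
training_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target)

for _ in range(50):
    for x, target in training_data:
        error = weight * x - target           # how wrong the model is
        weight -= learning_rate * error * x   # data adjusts the algorithm

print(round(weight, 3))  # converges toward 2.0, the underlying rule
```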

Effective teams coalesce based on a number of factors, one of which is shared experience. A crewed aircraft is operated by individuals who have gone through a training pipeline that is standardized and shared. In my experience as an instructor, everybody gets to the same place by the same route, which provides some level of shared experience even when the team members have vastly different experience levels. Once the team is formed, its members train together to learn about individual capabilities. This shapes the team to maximize individual strengths while offsetting weaknesses. In order to incorporate a machine into the team (particularly one with a combat mission), the machine is going to have to follow the same process a human would. Without such training, aviators will not trust the AI, and rightfully so. As with terrain-following radar and the Auto Ground Collision Avoidance System, aviators will not risk their lives on unproven technology. The only way to prove the effectiveness of the team is to prove to the aviators that the technology works — and to do it on the aviator’s terms, not the computer scientist’s. Thus, the human-machine team will have to train as a team for the missions they are preparing to execute. Critically, in order to be an effective team, the human and the machine are going to have to communicate effectively.

Voice Communication

In aviation, voice communication is king, both inside and outside the cockpit, and has been for more than 50 years. While the introduction of various forms of datalink, from NATO’s Link-16 to civil aviation’s automatic dependent surveillance-broadcast (ADS-B), has reduced some requirements for voice communication, they have by no means eliminated it. That’s because a datalink shows only a partial picture of the airspace around the aircraft. A Link-16 picture shows cooperative, link-equipped friendly forces (adversary and neutral forces have to be detected and placed on the display), while ADS-B shows only other aircraft equipped with the same technology or air traffic relayed by Federal Aviation Administration radars through the traffic information service. Neither datalink expresses intent or issues instructions. The manner in which an artificial crew member is integrated with future aircraft could lead to changes in how aircrew communicate, which will also be an incremental change based on operational experience.

In a fighter aircraft, Link-16 can show the crew the position and status of the flight. If the flight lead wants that flight to do something, those instructions are issued by voice using a very specific language. Because radio communications are half-duplex (two-way communication where only one user can talk at a time), radio channels can become congested very quickly. Often, an entire strike package operates on a single radio frequency (or net, if anti-jam is used), and strike packages as large as 60 aircraft are not unheard of. Accordingly, NATO aviators use “brevity code” to cram the maximum amount of data into the fewest words. For example, an instruction by a flight lead (Panther 1) to his number 3 and number 4 aircraft (Panther 3’s element) to target a specified group of aircraft with AIM-120 missiles using launch-and-leave tactics is compressed into a call sign and four words: “Panther 3, target south group, skate.” A lot is left unsaid because Panther 3’s crew will no doubt recognize the flight lead’s voice and know who issued the task. Panther 3’s acknowledgment follows immediately afterward with one word, “three,” and now everybody on the frequency knows that the group of hostile aircraft is targeted and by whom. Panther 3’s intentions are assumed — that element is going to follow Panther 1’s command. All in seven words, total.
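
That compression is exactly what makes machine listening plausible: a tasking call has a rigid, predictable structure. Below is a minimal sketch of parsing the example above into machine-usable fields; the pattern and field names are assumptions for illustration, not a standardized brevity grammar.

```python
import re

# Matches taskings shaped like "Panther 3, target south group, skate".
# The pattern and field names are illustrative, not an official grammar.
TASKING = re.compile(
    r"^(?P<callsign>[A-Za-z]+ \d+), target (?P<group>\w+) group, (?P<tactic>\w+)$"
)

def parse_tasking(transmission: str) -> dict:
    """Turn a compressed brevity call into structured data that a
    machine teammate could correlate with the tactical picture."""
    match = TASKING.match(transmission)
    return match.groupdict() if match else {}

print(parse_tasking("Panther 3, target south group, skate"))
# {'callsign': 'Panther 3', 'group': 'south', 'tactic': 'skate'}
```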

All of this means that a great deal of critical, real-time information is available on voice, and a notional R2 unit is going to have to understand voice communications in order to share a common understanding with the human members of the team. Without an understanding of what is occurring on the radios (or in the cockpit), the machine team member will lack the situational awareness necessary to tailor its actions to the tactical situation. One of the key constraints on the human-machine team is that the human cannot afford to spend time continuously updating the machine’s situational awareness. It is essential that the human-machine interface reduce the human’s workload rather than increase it.

A machine that can keep track of the voice flows could prove invaluable in a number of situations. A machine might keep a running tally of targets successfully hit by other members of the package. In a close air support or combat rescue fight, the on-scene commander will often assemble a “stack” of friendly aircraft (so-called because the aircraft are assigned different altitude blocks to guarantee deconfliction). Each of these flights has a different ordnance loadout and endurance, and keeping track of the mass of aircraft is currently a pencil-and-kneeboard exercise. But an AI that can listen to the check-in calls, keep track of the flights’ fuel and ordnance loads, and display the stack graphically would free up a lot of human cognitive capacity and reduce the amount of time that aircrew spend heads-down in the cockpit.
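
A minimal sketch of that bookkeeping, with the check-in fields, units, and example values invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class FlightCheckIn:
    """One flight's check-in, as the AI might log it from the radio.
    Fields and units are assumptions for this sketch."""
    callsign: str
    block_floor_ft: int    # bottom of assigned altitude block
    block_ceiling_ft: int
    ordnance: str
    fuel_lbs: int
    playtime_min: int      # endurance remaining on station

stack: list[FlightCheckIn] = []

def check_in(flight: FlightCheckIn) -> None:
    """Add a flight as its check-in is heard; keep the stack ordered
    by altitude block so it can be displayed graphically."""
    stack.append(flight)
    stack.sort(key=lambda f: f.block_floor_ft)

check_in(FlightCheckIn("Viper 21", 10_000, 11_000, "4x GBU-54", 7_200, 30))
check_in(FlightCheckIn("Hawg 11", 8_000, 9_000, "2x GBU-38, gun", 9_500, 45))
for f in stack:
    print(f.callsign, f.block_floor_ft, f.ordnance, f.fuel_lbs, f.playtime_min)
```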

Voice is a logical way to interface with and direct an AI, in the same way it is used to interface with and direct human teammates. In World War II, when multi-crew aircraft became common, even intercom calls were prefaced with positions (“Pilot to nav,” for example). Today, intercoms are clear enough that who is talking to whom in a multi-crew aircraft is often derived from context. In a similar vein, it would be best if the AI could determine from context whether a voice command is intended for internal or external action. This might be as simple as knowing whether the radio’s push-to-talk switch is actuated, although sometimes a single communication is both internal and external. Using the same scenario as above: “Panther 1’s targeting the north group” lets both the wingman and the weapons systems officer in the back seat of the jet know the flight lead’s intentions. In the near term, it might be necessary to follow Apple’s voice-command path for the iPhone’s Siri function (I suggest “R2” instead of “Siri”), but contextual extraction of voice commands should remain a goal.
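
A sketch of that routing logic, assuming the AI can see a push-to-talk discrete and using a hypothetical “R2” wake word:

```python
def route_utterance(utterance: str, ptt_keyed: bool) -> set[str]:
    """Decide who a voice transmission is for. The AI monitors every
    transmission to maintain situational awareness; the push-to-talk
    switch marks it as a radio call, and the wake word marks it as a
    direct tasking. All names here are assumptions for the sketch."""
    destinations = {"ai_monitor"}          # the AI always listens
    if ptt_keyed:
        destinations.add("radio")          # keyed radio: to the flight
    if utterance.lower().startswith("r2,"):
        destinations.add("ai_command")     # wake word: task the AI
    return destinations

# A single call can serve both audiences, as in the example above:
print(route_utterance("Panther 1's targeting the north group", True))
print(route_utterance("R2, display the stack", False))
```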

It also goes without saying that, as evocative as R2-D2’s whistles were, voice communication back from the AI would be somewhat superior to modulated tones. Voice warning has been used in aircraft for decades, although no voice warning system has an external prioritization sequence that tells it when it is appropriate to interrupt and when it is not. (“Pull up” or “FIRE, FIRE” is clearly more time-critical than “fuel low.”) As with the radios and intercom, the architecture is there for AI designers to capture.
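
A sketch of what such an interrupt prioritization might look like, with the rankings and messages invented for illustration:

```python
# Lower number = more urgent. Rankings here are illustrative only.
WARNING_PRIORITY = {"PULL UP": 0, "FIRE, FIRE": 1, "FUEL LOW": 5}

def may_interrupt(warning: str, current_audio_priority: int) -> bool:
    """Let the AI speak over ongoing audio only when the new warning
    outranks whatever is already playing."""
    return WARNING_PRIORITY.get(warning, 9) < current_audio_priority

# While routine AI chatter (priority 5) is playing:
print(may_interrupt("PULL UP", 5))   # True: interrupt immediately
print(may_interrupt("FUEL LOW", 5))  # False: wait for a pause
```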

Conclusion

Adding AI to aircraft is a logical extension of the automation trend that Sperry started over a century ago. Moving from pure automation to a human-machine team where one member of the team is an AI is a challenging endeavor that may soon be within reach. Utilizing architectures and techniques that are already in place for human-human teams will simplify the introduction of AIs to their human team members and likely lead to earlier acceptance. Any struggle to communicate will complicate and delay the effective utilization of AI team members and could derail it entirely. But a fictional character in a classic space opera illustrates a potential way ahead for effective human-machine teaming and should serve as a model for the necessary performance and control interface. The R2 unit has been thoroughly illustrated — we need only build it.

Mike “Starbaby” Pietrucha (Col, USAF, Ret.) is an experienced fighter/attack aviator with over 1,700 flight hours and 156 combat missions in the back of the F-15E and F-4G, granting some distant kinship to an R2 unit in an X-wing.

The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Air Force, the Department of Defense, or the U.S. government.

Image: U.S. Air Force photo by Lt. Sam Eckholm