The Dawn of Artificial Intelligence in Naval Warfare

The U.S. Navy is investing real money to integrate artificial intelligence (AI) into the force, requesting $62.5 million in the FY19 Defense Department budget for AI and rapid prototyping. As the technology matures, the Navy needs to adapt by displacing human intelligence in roles for which AI is better suited while remaining aware of the many roles in which human intelligence will still have an edge. The Navy should identify candidates for automation where, relative to human intelligence, AI is likely to be increasingly fast, agile, or low-cost. But leadership should also understand where AI is unlikely to apply and grasp the implementation difficulties the Navy faces relative to other government and commercial organizations. Discussions around AI need to evolve from “we need it” to “this is how we get it.” It’s time to think about some fundamental concepts Navy decision makers need to know to help get AI right.

The Navy should invest in these capabilities for tasks with rules or patterns that are predictable and difficult to disrupt, and should avoid automating tasks with rules and patterns that change unpredictably. The service should focus on collecting data efficiently, finding effective communications pathways in the absence of reliable internet access, especially at sea, and customizing already available algorithms for naval purposes. More broadly, leadership should foster trust in these new capabilities through transparent and deliberate acquisition processes and by making clear to the rank and file how human and artificial intelligence will work together in future combat.

Which Tasks Is Artificial Intelligence Suited For?

The pursuit of “perfect AI” is a fool’s errand. To paraphrase George E.P. Box, “Essentially all AI is wrong, but some is useful.” Useful AI generates good-enough results at least a little faster, better, or cheaper than those produced by human intelligence. AI isn’t inherently better or worse than human intelligence but, depending on the task, can perform considerably better or worse.

Algorithms can be trained to complete narrow, specific, anticipated tasks with describable and predictable rules. When those conditions are in place, algorithms will improve faster than any sailor. This makes tasks involving repetitive and manually intensive training likely candidates for automation. AI can already outperform humans at many tremendously complicated tasks that meet these criteria. In 2017, for the first time, AI was able to consistently beat professionals at heads-up (one-on-one) poker, a game involving hidden information. The AI overcame the “fog of war” inherent in the game by self-generating extensive data on poker hands and then recognizing and exploiting winning patterns. In poker, pattern recognition is useful—thus, AI provides an advantage over humans. Naval tasks that meet these criteria and could benefit from AI include scheduling combat logistics force replenishments at sea and planning daily aircraft routing for amphibious ready groups.

Although “narrow” AI can be trained, it can’t learn to think. Thinking AI, or artificial general intelligence, exists only in science fiction. By contrast, narrow AI can find correlations in data but can’t actually comprehend its own actions. Both humans and AI can accumulate experience, knowledge, and skills, but only humans can put them into context. This is why IBM’s Watson, which was successfully trained to win Jeopardy, can’t engage in the sort of unstructured learning essential to earning a college diploma. Because AI can’t learn to think, even as the capability matures, human intelligence will remain more useful for tasks with indescribable or unpredictable rules. Imagine a poker competition where, instead of playing variants of poker with fixed rules, players are allowed to change the rules each hand to make up new games. Unpredictable rule changes fool even the best current AI approaches, which can’t put new rules in context without human help. The future of man-machine teaming involves AI performing the repetitive drudge work while teamed with humans who think through unpredictable tasks. Autonomous cars are a good example—their autopilot systems can be trusted in predictable driving conditions, but when the unanticipated occurs, they can’t adapt without human oversight. Autonomous naval ships and aircraft will be no different.

Automated pattern recognition is an important subset of AI that is becoming integral to many widely used AI applications. Pattern recognition, once automated, is useful for rapidly conducting intensive pattern searches in large amounts of data and can often produce faster (and cheaper) results than traditional statistical techniques. Typically, the most labor-intensive part of automating a pattern recognition process is preparing the data that essentially “trains” a machine learning algorithm to recognize patterns. Before algorithms can be trained, they require large amounts of clean (error-free), organized data. The dirty secret of machine learning is that human labor is often the only way to efficiently clean training data. For example, before a machine learning algorithm can be trained to find a warship in an image or video, humans first have to categorize thousands of images of warships. However, once trained, a machine learning algorithm typically only needs new training data when the patterns of interest change.
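
To make the training pipeline concrete, the sketch below shows the pattern in miniature using the free, open-source TensorFlow library discussed later in this article. The directory layout, image size, and network structure are illustrative assumptions, not a fielded system; note that the human labor comes up front, when people sort thousands of images into labeled folders before any training begins.

    import tensorflow as tf

    # Humans must first sort images into labeled folders, e.g.,
    # data/warship/... and data/not_warship/... (hypothetical paths).
    train_data = tf.keras.utils.image_dataset_from_directory(
        "data",                 # assumed directory of human-labeled images
        label_mode="binary",    # two classes: warship or not
        image_size=(128, 128),  # resize every image to a common shape
        batch_size=32,
    )

    # A small convolutional network: it learns visual patterns, not concepts.
    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 255),            # normalize pixel values
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "warship"
    ])

    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Once trained, the model only needs new labeled data when the
    # patterns of interest change (new ship classes, new camouflage).
    model.fit(train_data, epochs=5)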

However, automated pattern recognition can be highly susceptible to small changes in patterns and can be deceived by opponents with the ability to disrupt those patterns. The pattern recognition software trained to recognize a warship should not be trusted to reliably recognize a vessel whose visual characteristics can be changed in unanticipated ways. The Navy should only use automated pattern recognition in situations where the patterns are less vulnerable to adversary manipulation. On the other hand, for stable, difficult-to-disrupt patterns, automated searches are often useful for generating valuable insights at speed.

What the Navy Needs

Beyond broad conceptual recognition of the opportunities and limits of naval AI, building and integrating these systems will require four things: data sources, communication paths and databases, algorithms, and interfaces. While addressing these four requirements during the acquisition process does not guarantee the resulting capabilities will be useful, ignoring any of them will make it significantly more likely that the resulting AI system will not be useful.

Access to the right data is necessary for any successful naval applications of AI. For the Navy, the relevant data is often stored in databases that are difficult to access. The long-term solution may involve more integration of the Navy’s legacy databases or transfer of databases to a “cloud” or “data lake.” More value can probably be unlocked faster by simply making the Navy’s legacy databases more accessible. But because data requires resources to collect, even organizations that have efficient data environments must be selective in the data they choose to gather.

The challenge is not just about data access, but about the economy of that data—the value versus the price of acquisition. In naval warfare, data is expensive because the data sources and communication paths required to collect and transmit data are expensive. Other services face similar problems: Last year, the Air Force cancelled its Air Operations Center 10.2 contract to convert “raw data into actionable information that is used to direct battlefield activities” after project costs surged from an original $374 million to $745 million. The Navy should explicitly define which data are necessary for generating the desired information for specific AI applications, and determine how much that data will cost. Unnecessary additional features can become unexpected drivers of data requirements. By weighing the costs and benefits of its data early, the Navy can pay only for the data it needs.

Another challenge facing the Navy’s AI efforts is that, at sea, it can’t take advantage of the speed, high bandwidth, and low cost of the internet as its primary communications path. As such, the service will remain dependent on communication paths that are extremely expensive, manually intensive, “stovepiped,” and low-bandwidth, such as radios and data links. To make matters worse, adversaries are expected to contest those paths. The Navy’s continued reliance on relatively tiny streams of expensive data will be a challenge even aside from issues surrounding AI.

The Navy, not contractors, should own all data and analysis generated from it. Relinquishing that ownership will prevent the service from switching contractors without risking data loss. For example, Google recently announced that it would not be renewing its contract on Project Maven, a program designed to automate object recognition in imagery acquired from drone surveillance. The Defense Department owned the data, though, so several other capable AI companies are likely poised to step into Google’s shoes.

The Navy’s costs to acquire algorithms should remain low relative to the costs of sensors, communication paths, and databases. Decision makers should think of algorithms as things to be customized, not built from scratch. A handful of problems will be exceptions and require completely new algorithms, but for many tasks, analogous algorithms capable of quickly providing insights with minor modifications are already freely available from academia and industry. Additionally, the most widely used AI software, including TensorFlow, is free and open-source. Free software and the same widely published algorithms—such as the sort used by UPS to route its delivery truck fleet—can, with minor modifications, be used to create a real-time naval weapon-target assignment plan.
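
As a sketch of how little new code such customization can require, the example below solves a toy weapon-target assignment problem with SciPy’s implementation of a widely published assignment algorithm. The probability-of-kill numbers are invented for illustration; real inputs would come from the Navy’s own data.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Rows are shipboard weapons, columns are inbound threats; each entry
    # is a hypothetical estimated probability of kill for that pairing.
    p_kill = np.array([
        [0.90, 0.40, 0.10],  # weapon 1 vs. threats A, B, C
        [0.35, 0.85, 0.30],  # weapon 2
        [0.20, 0.45, 0.80],  # weapon 3
    ])

    # The solver finds the pairing that maximizes total effectiveness.
    weapons, threats = linear_sum_assignment(p_kill, maximize=True)

    for w, t in zip(weapons, threats):
        print(f"Assign weapon {w + 1} to threat {chr(ord('A') + t)} "
              f"(estimated p-kill {p_kill[w, t]:.2f})")

Because a solver like this runs in milliseconds, it can simply be re-run as threats appear or weapons are expended, which is what makes real-time updating plausible.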

As AI applications grow in the Navy, it will be critical to monitor and exercise oversight of contractors. Defense contractors will have to make implicit ethical choices in their algorithm designs and selection of training data. The Navy can provide honest governance and oversight of those choices by having its own algorithm experts set rules for contractors to follow, because if contractor-produced algorithms go wrong, it is the service itself that will be held accountable.

Finally, AI systems require good interfaces to efficiently connect actors, human and machine, to timely, understandable results. Without usable interfaces, information generated by algorithms can’t be acted upon. Beware: Sleek interfaces may be used to mask terrible AI!

A Final Element: Building Trust

For the Navy, too much trust in AI is likely to result in more harm than too little trust. Still, operators may have cause for skepticism. Once integrated with naval weapon systems, AI applications will have the potential to do enormous damage quickly. Leadership shouldn’t be too quick to conclude that sailors “don’t get it” if they raise questions. Currently, most AI applications involve techniques that are not easily explained. In commercial AI products, explainability may be less important as long as the algorithms work. How does Uber assign drivers and determine routes? How does Google generate search results? How does Tesla’s autopilot work? Many AI consumers don’t know. Sailors shouldn’t be expected to be experts on algorithmic techniques either, but they will need a stronger understanding of AI’s capabilities and limitations. Designing naval AI acquisition with understandable benchmarks that describe a “good enough, fast enough” solution will help build trust in the capabilities as those benchmarks are achieved.

Tasks the Navy should start to automate include dynamic frequency allocation in communications and electronic warfare plans, real-time shipboard weapon pairings to swarming threats, and coordinating swarming systems to efficiently target distributed moving contacts. In these cases, AI could provide solutions comparable in quality to those currently produced through staff work. However, AI solutions would be available orders of magnitude faster than manual solutions and could quickly update as conditions change. Algorithms to perform each task are already freely available from academia, the rules of each task are described in detail in Navy doctrine and tactical publications, and the Navy already collects the necessary data to perform the tasks manually. Automating these tasks would be useful because the Navy’s existing manual command and control decision structure is so slow that it risks being overwhelmed in a real fight against a capable, swarming adversary.
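
As one illustration of how freely available those algorithms are, dynamic frequency allocation can be framed as the textbook graph-coloring problem. The sketch below uses the open-source NetworkX library; the emitters and interference pairs are hypothetical stand-ins for a real communications plan.

    import networkx as nx

    # Nodes are emitters; an edge means two emitters would interfere if
    # assigned the same frequency. These pairs are invented for illustration.
    interference = nx.Graph()
    interference.add_edges_from([
        ("ship_1", "ship_2"),
        ("ship_1", "aircraft_1"),
        ("ship_2", "aircraft_1"),
        ("aircraft_1", "aircraft_2"),
    ])

    # Greedy coloring gives each emitter the lowest-numbered "channel" not
    # already used by a neighbor; re-running it as the picture changes
    # yields a fast, continuously updated allocation.
    channels = nx.greedy_color(interference, strategy="largest_first")
    print(channels)  # maps each emitter to a channel number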

AI is both imperfect and useful. It can be used to unlock great value in some tasks while it may be useless, or even dangerous, in others. Useful naval AI systems will require data sources, communication paths and databases, algorithms, and interfaces. Tasks involving repetitive and manually intensive training will be increasingly automated, and tasks with indescribable or unpredictable rules won’t. The Navy will still need educated, thinking sailors who aren’t easily fooled by changing rules and patterns. By teaming human and artificial intelligence the right way, the Navy can create a more lethal fighting force, fit for the future of naval combat.

Lieutenant Commander Connor McLemore is an E-2C naval flight officer with numerous operational deployments during 18 years of service in the U.S. Navy. He is a graduate of the United States Navy Fighter Weapons School (Topgun) and an operations analyst with Master’s degrees from the Naval Postgraduate School in Monterey, California, and the Naval War College in Newport, Rhode Island. In 2014, he returned to the Naval Postgraduate School as a Military Assistant Professor and the Operations Research Program Officer. He is currently with the Office of the Chief of Naval Operations Assessment Division (OPNAV N81) in Washington, D.C.

Lieutenant Hans Lauzen is a Navy Information Professional officer currently serving as an assured communications analyst within the Office of the Chief of Naval Operations Assessment Division (OPNAV N81) in Washington, D.C., where he interprets scientific studies and wargame results to guide strategic investments. He previously served as a surface warfare officer. He is a candidate for a Master’s degree in Business Administration at the University of Virginia.

The views expressed here are theirs alone and do not reflect those of the U.S. Navy.

Image: U.S. Naval Research Laboratory
