Marines, Algorithms, and Ammo: Taking ‘Team of Teams’ to the Contested Littorals


 “The one who becomes the leader in this sphere (artificial intelligence) will be the ruler of the world.”
–Russian President Vladimir Putin 

“AI has become a new focal point of international competition. AI is a strategic technology that will lead the future …”
–China’s New Generational Artificial Intelligence Development Plan  

“Ten years from now … if the first thing going through the door of a breach isn’t an unmanned system, then shame on us.”
–Col. Robert Work, U.S. Marine Corps (ret.), U.S. Deputy Secretary of Defense, 2014-2017

13 Days

It’s Oct. 21, 2020, 13 days before the U.S. presidential election. Tensions in the South China Sea escalate when a U.S. destroyer strikes a Chinese commercial vessel in “international waters.” Unbeknownst to the American ship’s captain, this commercial vessel is under Chinese government contract, full of military supplies, and under detached escort. The situation deteriorates dramatically when the People’s Liberation Army-Navy detains 19 Americans who went overboard after the collision. China initially states that the sailors have been rescued, only to later refuse to return them, claiming they were participating in an illegal intelligence mission. At least 25 Americans are missing at sea, along with sensitive military equipment. China refuses U.S. requests for American ships and aircraft to participate in the rescue effort. The president directs a rescue effort regardless. The first U.S. naval vessels to arrive on-scene are met with multiple volleys of near-miss anti-ship cruise missiles. China sends back-channel messages to the White House making clear that further aggressive actions will be met by missiles that “will not miss.”

It’s now 10 days before the election. Public opinion polls show an overwhelming majority of Americans expect the president to take a stand, including using U.S. military force, if required. The president orders the U.S. military to establish sea control around the search area so that the rescue efforts can proceed unimpeded. Given the geographic challenges, meeting the president’s intent requires establishing multiple land-based expeditionary advance base sites to support naval assets at sea. Messages are sent to China’s leadership to ensure forthcoming American actions are understood. Receipt is acknowledged, with a warning: “We will not miss this time.”

It’s now seven days before the election. The U.S. joint task force maneuvers into the objective area. China tracks every vessel from its point of departure using commercial satellites overhead, paired with remote meteorological sensors, through a computer vision and artificial intelligence tracking system. To the task force’s surprise, nothing happens. Neither the limited-range, limited-endurance combat air patrols off the carriers nor the few F-35s on the amphibious ships detect anything out of the ordinary. The marines load up into their new amphibious vehicles and start swimming ashore. All appears quiet at their assigned expeditionary advanced base objectives. Only a few Chinese drones are flying, but at great distance, and outside U.S. missile reach from the objective area.

Then it all hits at once.

Long-range anti-ship missiles, a couple of hypersonic drones, a salvo from China’s integrated air defense system, and an airborne wall from the Chinese Air Force with drones in front. Rocket-delivered swarms of more than 1,180 drones each attack the amphibious vehicles while mobile anti-ship cruise missiles and long-range hypersonic missiles saturate the entire task force. Centralized command and control systems come under a barrage of cyber-attacks. In the hour-long chaos, thousands of U.S. sailors and marines perish and America’s amphibious capability is destroyed.

Far-fetched? This is Adm. (ret.) James Stavridis’ coming maritime hybrid threat amplified.

It’s certainly not difficult to imagine misinterpreted actions in the South China Sea, or a missile test gone bad, catalyzing nation-state conflict the likes of which the world hasn’t experienced in over 50 years. Should this happen, what ought the Marine Corps be able to provide our nation? We recently addressed this question in an article titled “Open Your Eyes and See the 21st Century MAGTF.” A fellow marine, Ian Brown, retorted with a foreboding tale, “When the Unblinking Eye Closes: Digital Feast and Famine in the Marine Corps.” We’re grateful for his response.

We were arguing two main points: first, and still of unparalleled importance, was continuing the decade-plus overdue push towards the Marine Corps’ adoption of “Guardian Angel” drones, with which Brown concurs. Second, we sketched out how a 21st century Marine Air Ground Task Force would operate together, including employing semi-autonomous and autonomous systems, with scenarios similar to the one above in mind. The future of the Marine Corps hinges on pivoting toward an agile, networked, manned-unmanned “team of teams” structure. This evolution is fundamental to preserving the unique functioning of the Marine Corps in serving our nation successfully in the future. What began with Ben Brewster’s courageous call for a Marine Corps “Grunt-Angel” has now evolved into significant concerns over the successful employment of the naval expeditionary force writ large, especially for littoral operations in contested environments.

The Fusion: People and Ideas and Hardware

Before addressing the scope of this problem, we need to underscore that we too are Col. John Boyd disciples when it comes to the spirit of “people, ideas, and hardware — in that order.” But just because we prioritize people, ideas, and then hardware doesn’t mean that we must consider them independent or even sequential. Instead, to properly prioritize people, the whole organizational structure must subsume people, ideas, and hardware in concert. Brown’s primary critique suggests that we have (mis)prioritized machines over people, but this is not just a misreading of our point; it rings hollow. Optimization is based on interdependence: people with ideas, manned and unmanned teams, hardware programmed with innovative software, among others. What was binary and sequential is now persistent superposition: people and ideas and things together, as a “team of teams.”

Consider this fusion: “virtual gyms” that enable training smaller units to operate in distributed environments. Not only was this one of Brown’s constructive critiques, but in a previous Marine Corps Gazette article about Gen. (ret.) Bob Scales’ new book, two of us argued, “much more needs to be done to provide what Scales describes as ‘virtual top guns’ for every infantry sergeant and lieutenant, as well as their units.” When people, ideas, and hardware are considered separately, disparities in prioritization emerge, such as the one we highlighted when ruefully asking: “Does our Corps really care almost 76 times more about the F-35 than it does taking our infantry squads’ and platoons’ realistic training opportunities to the next level?”

Additionally, over the past 24 months all four of us have focused on enhancing our small unit drone capabilities as part of the team spearheading our service’s “quads for squads” effort. Since, as one of us stated publicly, “air superiority is not a good assumption anymore from the ground,” we must prepare for situations when a “Guardian Angel” drone or a significantly more expensive manned observation and close air support platform is not immediately available. These small quads provide squad and platoon leadership with their own layer of organic aerial observation. Plus, a related loitering munitions effort will provide these units their own integrated aerial strike capability. These are not substitutes for conventional aviation support integration, but indicative of how we can prioritize people by leveraging ideas based upon current and emerging hardware and software capabilities.

Such fusion is not just occurring at the small unit level, but also at the institutional level. As “Feast and Famine” indicates, and as one of us wrote about separately, potential adversaries will challenge the MAGTF’s ability to use the communications spectrum. This is why, in “Open Your Eyes,” we expressed excitement over 2,500 to 3,000 information warfare marines joining the Marine Corps’ ranks. Like artillery marines firing counter-fire missions or infantry units locating and killing the enemy, we envision information warfare marines fighting to protect and leverage our networks, constantly, something our commandant has stressed as necessary in the “violent, violent fight” ahead.

Yet, what the commandant fears most is not the fight itself, but our institutional ossification and fear of change. “40-second Boyds” won’t work for these missions; the MAGTF’s information warfare marines need to be able to fight perpetually. But when “our Corps has failed to retain the best and brightest aviation electronic warfare experts,” as one of us previously wrote at War On The Rocks, the Marines “will have an even longer road ahead to redefine, build, train to, and fight with electronic warfare.” And this is not the only community losing critical talent. So, once again, we are joined at the hip with our fellow marine, prioritizing people. However, where we seem to diverge is in integrating ideas and hardware – what Australian Army Maj. Gen. Mick Ryan classifies as the “imperative” of human-machine teaming – and what that implies for the future fight.

Jamming the Famine

“Feast and Famine” seems to imply that the recent character of warfare has become absolute. But we believe making assertions that domains will be impenetrable — “blinded by cyber and electronic attacks, the task force must guess at the location of anti-access, area denial (A2/AD) weapons” — is ahistorical. Any notion that this might be true should only force us to re-double our efforts to overcome such challenges. For example, if the MAGTF has to fight in spectrum contested environments, then “Guardian Angel” drones, from persistent off-set locations, can facilitate intelligence sharing and command and control, much like the U.S. Missile Defense Agency has demonstrated, cueing other platforms and enabling the reduction of the A2/AD boogeyman.

These same drones, employing technologies already demonstrated or in use, can release swarms of other autonomous drones to hunt down, disorient, and defeat adversary air defense systems. They can do this using in-stride, beam-switching-capable beyond-line-of-sight or line-of-sight datalinks, just as the U.S. naval force did when employing “kamikaze” drones during the Pacific campaign 73 years ago. Further, by serving as multi-intelligence collection and aerial communications bridges for advanced waveforms, the “Guardian Angel” drones can enable the naval force to leverage, extend the range of, and translate information between dissimilar users, while gathering information on the adversary.

The same adversary assets that “Feast and Famine” states will blind the naval force will simultaneously be cueing us by emitting a signature ripe for exploitation. The Israelis heavily leveraged unmanned aircraft to exploit vulnerabilities and defeat sophisticated adversary air defense systems in their 1973 and 1982 wars. For future MAGTFs, possessing such capabilities to exploit adversary vulnerabilities will be critical to establishing expeditionary advanced bases. Like a self-healing, nodeless network, the MAGTF “team of teams” must be resilient if it is to establish sea control and command of the littorals. Consider the recent MQ-9 Reaper shot down over Yemen. Besides a few headlines and YouTube videos, the event quickly faded away, while other MQ-9s continued the mission. It is not without consequence that an MQ-9 was shot down, but imagine how a downed unmanned platform changes a commander’s risk calculus in contested environments. If, as the commandant declares, “we’re going to have to fight to get to the fight,” why must that first fight be manned? Why would the American people even want it to be manned?

Not only do drones provide abundant capabilities, they do so without risking manned pilots and costly aircraft. If one is shot down, there is no need for a rescue mission like the one mounted for the stealth fighter shot down over Serbia in 1999. There is also no concern about extending the risk to those employed on rescue missions, such as the marines who rescued Scott O’Grady after his F-16 was shot down. Nor would we have to worry about adversaries live-streaming their torture of our pilots on the internet to influence public opinion. The material loss is just that, material: only a blip in the news cycle. The biggest threat is the capture and reverse engineering of the technology, a risk that can be mitigated.

Changing the commander’s risk calculus actually engenders the fight in these contested spaces. As the Israeli military first learned 44 years ago when using Firebee 1241 unmanned aircraft to destroy Egyptian air defense systems, what better “waste” of an adversary surface-to-air missile than on a drone? Similarly, what better “waste” of an advanced integrated air defense system in the future than on a drone swarm? When no one else can breach the beach at politically acceptable levels of risk, who will, if not unmanned assets?

Breaching with Artificial Intelligence: A Historical Perspective

Maybe our opening vignette is incorrect and our marines, despite the warnings, make it ashore to find highly trained adversary manned-unmanned teaming-enabled infantry units surrounding their objectives. In such an environment, would fighting blind, isolated, and alone, as “Feast and Famine” suggests in referencing Guadalcanal, be our only option? Today, is this even a politically acceptable option? During a recent House Armed Services Committee hearing on the subject, two congressmen who served in the Marine Corps challenged the viability of such a contested landing. They raised concerns about political costs, as well as the imbalance between the human and fiscal cost of current amphibious concepts relative to potential adversary capabilities, much like Frank Hoffman has argued previously. Is this approach the best option the Marine Corps can provide to execute Congressionally-mandated missions to seize and defend advanced naval bases?

In the last several weeks, multiple U.S. defense experts have indicated that fighting blind, isolated, and alone is not an acceptable going-in option, especially without realistic training. For example, at a recent conference, U.S. Air Force Lt. Gen. Jack Shanahan, the Pentagon’s director of defense intelligence for warfighting support, stated “the Department of Defense should never buy another weapons system for the rest of its natural life without artificial intelligence baked into it.” His declaration came less than 24 hours after Robert Work, the father of the Pentagon’s “third offset” strategy, responded to hearing Google’s Eric Schmidt calling for America to develop a national AI strategy by stating, “the image that popped into my mind was of Nikita Khrushchev banging his shoe in the UN and saying, ‘we will bury you.’”

So, what does this all mean for the Marine Corps’ ability to operate in contested littoral environments? Reflecting on the House Armed Services Committee hearing, frequent senior leader comments on artificial intelligence, and the primary options available to President Kennedy during the Cuban Missile Crisis’ 13 days, we offer the graphic below as a possible illustration of how policymakers might perceive risk should they have to confront anything like the worst-case scenario described above, given the embrace of artificial intelligence. Options five through seven on the right graph, “Crisis in the Pacific 2020,” are based on the potential of artificial intelligence to fundamentally change how the naval force conducts forcible entry: one consequence could be a significantly lower risk of massive U.S. and allied (human) casualties. However, in option seven, this risk is elevated for the opposite reason: the improved relative capabilities of adversaries’ weapons compared to such capabilities in 1962.

We believe our recommendations in “Open Your Eyes” are directly in-line with the Pentagon’s artificial intelligence goals, despite Defense Department policy prohibiting lethal autonomous weapons. They’re also in-line with the Marine Corps Operating Concept, which directs the service to “refine the concept of manned-unmanned teaming to integrate robotic autonomous systems with manned platforms and Marines” and “exploit man-machine and AI interface to enhance performance.”

What Would an Artificial Intelligence-enabled MAGTF Look Like?

Some focused artificial intelligence-enabled systems will provide decision-enhancing human-machine interfaces that offer probability-based threat analysis from distributed sensor networks. Other artificial intelligence-enabled systems will execute on platforms, like the current generations of the Firebee: going from a preprogrammed decoy to a complementary “loyal wingman” that is low-cost enough to be attritable, but robust enough to be recovered. By simply taking advantage of embedded artificial intelligence, the “wolf pack” shares its extended “vision” back to legacy manned platforms while extending the abilities of fifth-generation fighters. Collaborative swarming attributes, with embedded artificial intelligence, enable the above teaming and capability concepts to search and find targets at machine speed.
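To illustrate, in the simplest possible terms, what probability-based threat analysis from a distributed sensor network could look like, consider the notional sketch below. It is purely illustrative: the sensor names, reliability weights, and naive log-odds fusion rule are our own assumptions chosen for clarity, not a description of any fielded or planned system.

```python
import math
from dataclasses import dataclass

@dataclass
class SensorReport:
    sensor_id: str
    p_threat: float      # this sensor's estimated probability the track is hostile
    reliability: float   # 0-1 weight reflecting how much the network trusts the sensor

def fuse_reports(reports: list, prior: float = 0.05) -> float:
    """Fuse independent sensor reports into one threat probability.

    Works in log-odds space so that many weak cues combine gracefully;
    each report's contribution is scaled by its reliability weight.
    """
    log_odds = math.log(prior / (1.0 - prior))
    for r in reports:
        p = min(max(r.p_threat, 1e-6), 1.0 - 1e-6)  # clamp to avoid infinities
        log_odds += r.reliability * math.log(p / (1.0 - p))
    return 1.0 / (1.0 + math.exp(-log_odds))

# Three distributed sensors disagree; fusion still yields a single number
# a decision-maker (or a downstream algorithm) can act on.
reports = [
    SensorReport("esm_node_1", p_threat=0.80, reliability=0.9),
    SensorReport("radar_node_2", p_threat=0.55, reliability=0.6),
    SensorReport("eo_ir_drone_3", p_threat=0.30, reliability=0.4),
]
print(f"Fused threat probability: {fuse_reports(reports):.2f}")
```

The point of the sketch is the design choice, not the arithmetic: many cheap, imperfect, distributed sensors can be collapsed into one confidence number that a marine or a machine can act on at machine speed.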

Students at the Massachusetts Institute of Technology have an annual competition to battle out the best algorithms for just these types of missions. Developing swarm tactics code is also the intent of DARPA’s Offensive Swarm-Enabled Tactics sprints empowering “aircraft carriers in the sky.” The Office of Naval Research’s Low-Cost Unmanned Aerial Vehicle Swarming Technology and the Strategic Capabilities Office’s Perdix swarm provide hardware for this kind of artificial intelligence, which can be coupled with these ideas. The collaborative swarming approach is a simpler and more affordable way to increase the mass, effectiveness, and survivability of our personnel and their platforms while “fighting to get to the fight.”
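As a notional illustration of what “swarm tactics code” can mean at its most basic, the sketch below lets a handful of low-cost drones divide detected targets among themselves by distance, without a central node handing out assignments. Every name, position, and the simple greedy pairing rule are assumptions made for illustration; this is not the actual logic of the MIT competition, OFFSET, LOCUST, or Perdix.

```python
import math

def assign_targets(drones: dict, targets: dict) -> dict:
    """Greedy allocation: repeatedly pair the closest unassigned drone-target
    pair until either drones or targets run out.

    Both arguments map an id string to an (x, y) position.
    """
    assignments = {}
    free_drones, free_targets = set(drones), set(targets)
    while free_drones and free_targets:
        d_id, t_id = min(
            ((d, t) for d in free_drones for t in free_targets),
            key=lambda pair: math.dist(drones[pair[0]], targets[pair[1]]),
        )
        assignments[d_id] = t_id
        free_drones.remove(d_id)
        free_targets.remove(t_id)
    return assignments

# Four low-cost drones split three emitting air-defense nodes (positions in km).
drones = {"uav_1": (0, 0), "uav_2": (2, 1), "uav_3": (5, 5), "uav_4": (9, 2)}
targets = {"sam_a": (1, 1), "sam_b": (6, 4), "sam_c": (8, 3)}
print(assign_targets(drones, targets))
```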

Part of the difficulty of attempting an amphibious landing in our ominous scenario is that our MAGTF could be blinded, turning what was a relatively limited political risk into an engagement with expected carnage, the likes of which we have not seen since Tarawa or Guadalcanal. So, as marines, we owe the American people a better option, similar to T.X. Hammes’ “small, many, smart vs. few and exquisite”. Turning back to our chart of potential decisions, AI presents our political leadership with additional options.

These options are based on the threat of employing a swarm of weaponized AI-enabled drones to fight the first fight, to penetrate the defense, and in the case of option six, to facilitate the follow-on attack. This is an evolution, or a 21st century reinterpretation, of Thomas Schelling’s logic behind the “threat that leaves something to chance.”

As the MAGTF “team of teams” moves towards shared consciousness and empowered execution, our relationship with our hardware becomes one of centralized command and decentralized control. We set the machine’s priorities — target precedence list and information requirements — and preprogram possible actions, but ultimately the machine’s artificial intelligence optimizes as it sees fit. These swarms of loyal and collaborative wingmen “fight the first fight to get to the fight” — executing for the marine on the loop, gearing up for the follow-on, decisive fight.
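As a purely notional sketch of that relationship, the example below encodes a commander’s pre-set target precedence list and confidence threshold, lets the platform pick its own engagement within those bounds, and then holds for the marine on the loop. All names, values, and rules here are hypothetical placeholders, not any program of record.

```python
from typing import Optional

# Commander's pre-set intent: precedence list, confidence bar, and veto window.
PRECEDENCE = ["air_defense_radar", "anti_ship_missile_launcher", "armored_vehicle"]
MIN_CONFIDENCE = 0.85        # hypothetical threshold set before launch
VETO_WINDOW_SECONDS = 30     # time the on-the-loop marine has to override

def choose_engagement(detections: list) -> Optional[dict]:
    """Pick the highest-precedence detection that clears the confidence bar.

    Each detection is a dict: {"track_id": str, "type": str, "confidence": float}.
    Returns the chosen detection, or None if nothing satisfies the commander's intent.
    """
    eligible = [d for d in detections
                if d["type"] in PRECEDENCE and d["confidence"] >= MIN_CONFIDENCE]
    if not eligible:
        return None
    # A lower index in the precedence list means a higher-priority target.
    return min(eligible, key=lambda d: PRECEDENCE.index(d["type"]))

detections = [
    {"track_id": "t-07", "type": "armored_vehicle", "confidence": 0.97},
    {"track_id": "t-12", "type": "air_defense_radar", "confidence": 0.91},
    {"track_id": "t-15", "type": "anti_ship_missile_launcher", "confidence": 0.70},
]
choice = choose_engagement(detections)
if choice:
    print(f"Proposed engagement: {choice['track_id']} ({choice['type']}); "
          f"holding {VETO_WINDOW_SECONDS}s for human veto.")
```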

Inflection Point

When assessing the American way of war, Thomas Mahnken noted, “no nation in recent history has placed greater emphasis on the role of technology in planning and waging war.” Gen. Bob Scales made a similar observation: “we have in the past been generous in our willingness to expend firepower to save manpower.” The decisions to release ordnance from the air, from under the ocean, and to employ atomic weapons — all of which were initially opposed on ethical grounds — illustrate this point. Understanding applied artificial intelligence and ensuring that software-driven decisions can deliver high-confidence, precise effects is critical to informing policy. The Marine Corps has an ethical obligation to the American people to be at the forefront of developing this option.

When it comes to autonomous systems leveraging artificial intelligence, Vladimir Putin has made Russia’s mindset clear. Beyond aggressively pursuing and employing all types of unmanned systems, Russia and China are already pursuing weaponized artificial intelligence. To prevent the first “slaughter bots” massacre, America needs a legitimate artificial intelligence deterrence capability, powered by “policy, strategy, economics, and … by calculation of consequence.” The barriers to entry are too low not to have such an increasingly vital national security capability. And as Colin S. Gray has made clear, given Thucydides’ “fear, honor, interest” triplet, ‘arms control’ cannot control arms.

The commandant has repeatedly emphasized that, regardless of the difficulty of a given mission, Marines owe America success. The time to relentlessly attack the challenge and opportunity before us is now. As success or failure relates to the future of unmanned systems, autonomy, and artificial intelligence, the Corps has one option if the former remains the goal: move out rapidly — eyes wide open — to responsibly make manned-unmanned teaming a reality throughout our MAGTFs. And lest we be mistaken, we are not de-emphasizing the role of human beings in warfare at all. What we are saying is that for the Marine Corps to achieve its Congressionally-mandated national security requirements, particularly in the worst-case scenarios imaginable, it must immediately embrace manned-unmanned teaming to the maximum extent possible. Future grunts, trained in “virtual gyms,” will understand Maj. Gen. Ryan’s “imperative.” For them, it is only through such fusion – of marines, algorithms, and ammo – that they will be able to achieve decisive overmatch while closing with and killing the enemy during the last 100 yards.

 

Jeff Cummings is a Marine Infantry Officer and currently serves as a Military Faculty Advisor at The Expeditionary Warfare School, Marine Corps University.

Scott Cuomo is a Marine Infantry Officer and MAGTF Planner currently participating in the Commandant of the Marine Corps Strategist Program at Georgetown University.

Olivia A. Garard is an officer in the U.S. Marine Corps. She is an Unmanned Aircraft Commander and Aviation Safety Officer for VMU-1. Additionally, she is an Associate Editor for The Strategy Bridge and a member of the Military Writers Guild. She tweets at @teaandtactics.

Noah Spataro is a Marine Officer with significant time in service dedicated to unmanned aircraft supporting combined arms effects. He most recently served as the Marine Corps’ Unmanned Aircraft Systems Capabilities Integration Officer.

The opinions expressed are these Marines’ alone and do not reflect those of the U.S. Marine Corps, the Department of Defense, or the U.S. Government.

Image: 3d Marine Division/Thor Larson