Targeting the Islamic State, Or Why the Military Should Invest in Artificial Intelligence
On Halloween 2016, aircraft from seven countries gathered in the skies over Iraq’s western desert to conduct a coordinated strike. Intelligence reporting indicated that the Islamic State in Iraq and the Levant (ISIL) was using a remote complex of bunkers, a former Iraqi army depot, to manufacture and store weapons. The strike package included British Tornados and four French Rafales, which launched Storm Shadow air-to-surface cruise missiles at 14 bunkers. The other aircraft released their bombs over the remaining bunkers and buildings.
The strike culminated several months of work by a small Marine task force deployed to Al Asad Airbase in support of Iraqi army elements that, having retaken the Euphrates River Valley from Fallujah to Hadithah, were ready to begin their advance to the Syrian border. At the time, I was the task force’s targeting officer. We gathered, assessed, and compiled the intelligence supporting the target and ushered it through the approval and strike-coordination processes.
This strike was unique in its size and the time required to develop it, but the intelligence and targeting fundamentals we employed were the same for all 60 deliberate target strikes I conducted during our 10-month deployment. Reflecting on that targeting workflow illustrates the value of pairing artificial intelligence (AI) with existing hardware and sensor capabilities. AI is a force multiplier. It increases operational tempo and decreases uncertainty in combat environments, where doing more with fewer resources is an enduring expectation, speed is paramount, and the fog of war persists.
Impacts on a portion of the bunkers (photo credit: Ministère de la Défense)
The several months we spent developing the bunker target were largely a matter of hide-and-seek. ISIL was storing military resources in a facility that covered more than 45 square kilometers — the same footprint as 13 Central Parks — with more than a hundred large, fortified bunkers. For the target to be viable, I needed to identify the specific bunkers ISIL used. We leveraged a variety of intelligence disciplines and technologies to that end, but the most labor-intensive of them was full-motion video captured by our drones.
To collect this footage, a pilot flew a drone in circles, staring at a tiny portion of this massive facility, with an intelligence analyst sitting next to him, waiting to document activity that appeared in the field of view. I received daily summaries of any activity the analyst identified and watched the footage at the noted times to pull relevant still shots into the target package. If this process sounds tedious, it’s because it was. On a small task force, people and their time are limited. I regret to say that I devoted much of my time to watching hours of drone footage, which at least two other people had already watched, and then copy-pasting screenshots of that activity into PowerPoint.
This routine of collecting full-motion video occurred, on a smaller scale, for all the deliberate targets we struck, and for at least as many additional compounds of interest that we developed but did not strike for various reasons. If we had had a capability like Project Maven to help parse the innumerable hours of collected footage, our capacity with the same number of personnel would have been much greater. Better yet, with autonomous drones programmed to search specified areas and identify activity by fusing several sensor inputs, and with the ability to process that information at the edge, it would be difficult to overstate the increase in the amount of activity we could have collected and analyzed.
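To make concrete what even basic automation buys, consider a minimal sketch of activity flagging on recorded footage. It uses simple background subtraction as a stand-in for the trained computer-vision models a program like Project Maven would field; the file name, threshold, and frame-rate fallback are illustrative assumptions, not a description of any fielded system.

```python
# Minimal sketch: flag the seconds of full-motion video that contain
# movement, so an analyst reviews minutes of footage instead of hours.
# Background subtraction stands in for a trained detector.
import cv2

VIDEO_PATH = "drone_footage.mp4"  # hypothetical recorded feed
MIN_AREA = 500  # pixels; ignore sensor noise and small movement

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

flagged = set()
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Keep only foreground blobs large enough to be a person or vehicle.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if any(cv2.contourArea(c) > MIN_AREA for c in contours):
        flagged.add(int(frame_idx / fps))
    frame_idx += 1
cap.release()

# Hand the analyst a short list of timestamps instead of raw video.
for second in sorted(flagged):
    print(f"activity near {second // 60:02d}:{second % 60:02d}")
```

A pipeline like this would not replace the analyst; it would tell the analyst where to look, which is exactly the division of labor discussed below.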
Moreover, the gain is not just the capacity to process a greater quantity of information but also the capacity for greater quality through a proper division of labor. The military trains its people to perform specific occupational specialties because it understands that an organization of jacks-of-all-trades is one where everyone does everything poorly.
The military should extend this principle to technology. Computers excel at processing large amounts of data. Humans excel at using the output to make meaning and moral decisions. Programs like Project Maven offer an opportunity to establish the right relationship between what computers do well and what humans do well.
Aside from improving quantity and quality, incorporating AI into existing capabilities has the potential to increase operational tempo. If we had had the assistance of autonomous drones whose sensor inputs AI analyzed to flag activity for a human to review, I am confident that we would have developed a complete and accurate target package for the bunker complex in weeks rather than months.
There are well-defined standards and criteria for what constitutes a valid military target. Those standards are fixed, so the only way to increase targeting tempo is to satisfy them sooner by collecting the required information faster. What is more, once collected, the raw information must be synthesized into usable intelligence or a realistic operating picture. Today, both collection and processing are manual, labor-intensive endeavors. AI can relieve human operators of much of that burden, performing the same tasks better and faster.
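As a toy illustration of that synthesis step, the sketch below rolls timestamped activity detections into the kind of pattern-of-life summary a targeting officer actually reads. The detection records here are invented for the example; in practice they would come from analysts or from a flagging pipeline like the one sketched earlier.

```python
# Minimal sketch: turn raw, timestamped detections into a pattern-of-life
# summary. The records below are hypothetical, for illustration only.
from collections import Counter
from datetime import datetime

detections = [  # (timestamp, what was logged)
    ("2016-09-14T06:40", "vehicle arrives, bunker 27"),
    ("2016-09-14T06:55", "4 personnel unloading crates, bunker 27"),
    ("2016-09-21T07:10", "vehicle arrives, bunker 27"),
    ("2016-09-28T19:30", "2 personnel, bunker 31"),
]

# Which hours of the day see activity, and at which sites?
by_hour = Counter(datetime.fromisoformat(ts).hour for ts, _ in detections)
by_site = Counter(note.split(", ")[-1] for _, note in detections)

print("activity by hour:", dict(by_hour))  # {6: 2, 7: 1, 19: 1}
print("activity by site:", dict(by_site))  # {'bunker 27': 3, 'bunker 31': 1}
```

Nothing here is sophisticated, and that is the point: the bottleneck in targeting is rarely the judgment call at the end but the manual bookkeeping that precedes it.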
Some worry that AI will replace accountable human actors in the use of lethal force. Without checks and systems, that is a valid concern. With checks and systems, AI can aid accountable commanders who are confronted with hard moral decisions.
I encountered one such moral decision during a strike on another target. The facility was a large adobe house with a shed on the east bank of the Euphrates River. It sat on the corner of two dirt roads, one leading about 100 meters to a dock on the river and the other running in front of the compound and into a small town. Extensive reporting indicated that ISIL was using the facility to stage weapons, ammunition, and other resources before ferrying them across the river to attack the Iraqis and us.
While developing the target, we had seen groups of up to 12 men, some armed, at the compound. I nominated the target, and higher headquarters approved it with the caveat that we observe the target prior to the strike because we had also seen civilians walking on the road in front of the compound, though they never interacted with the facility. Prior to aircraft releasing their ordnance, we wanted to ensure there were no civilians on that road who could be hurt by the blast.
On the day of the strike, I was watching in our combat operations center as we positioned a drone offset to the east of the facility, the camera fixed on the target compound and the road in front of it. Two aircraft arrived on station, each with a 500-pound bomb — one for the house and one for the shed. The aircraft were about to enter their final approach when four men walked into the field of view from the road that led to the river and entered the house.
Immediately, a phone rang. It was the strike liaison, who was watching our drone feed remotely and relaying instructions to the pilots. He asked for an assessment of what we observed: Did we see them carrying any weapons? Was the target still valid? The men were in the field of view for only a few seconds, and the feed’s low fidelity made it difficult to tell whether they were carrying anything. I told the strike liaison that I couldn’t determine if the men were carrying weapons, but given the body of intelligence reporting, I was confident that we had positively identified that ISIL controlled the facility and was using it for military purposes. The activity we saw was consistent with the pattern of life observed during the target’s development, so I assessed that the four men were ISIL-affiliated and that the target was still valid.
I heard the strike liaison conferring with others in the background before he returned to the line, saying that the strike would continue. A few minutes later, the two aircraft flew to the target and released their weapons. The simultaneous impact of both 500-pound bombs created a dense cloud of dust over the target area. When it cleared, we saw that the house and the shed were destroyed.
Am I sure, without a doubt, that those men were ISIL fighters? No, I’m not. However, I stand by the assessment that we had accurately characterized the facility’s use for ISIL’s military purposes and demonstrated an operational necessity to destroy that capability. It was an ambiguous situation that allowed no time for deliberation. But that ambiguity is also the reason I was there, in the room, watching the strike. It was my job, not a computer’s, to weigh the ambiguity and decide what to do.
While I will never argue that a computer should have been responsible for making that decision, I wonder what tools could have helped assure me that I did the right thing. A drone capable of autonomous multi-sensor fusion, with a Project Maven-like program processing its collection at the edge, might have provided valuable context.
Perhaps such a drone would have acquired those four men as they crossed the river in a boat and seen one sling a weapon under his clothes as he stepped ashore. Or perhaps it would have found them at the dock, rolling up fishing nets and cleaning the day’s catch. Context matters, but providing that depth of context without AI-operated drone and sensor platforms would require a platoon of people manually piloting several drones and processing their collection. Most commands don’t have that much hardware and manpower, or can’t commit it without degrading other warfighting functions. Thus, commands accept a level of risk that AI could reduce if it were incorporated into their organic and supporting assets.
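For illustration, here is one simple way such sensor fusion could work in principle: combining independent confidence estimates from different sensors through log-odds, under a naive independence assumption. The prior, the sensor readings, and the scenario are all hypothetical; this is a sketch of the idea, not a fielded algorithm.

```python
# Minimal sketch of multi-sensor fusion: combine independent per-sensor
# probability estimates with a shared prior via log-odds (naive Bayes).
# All numbers are illustrative assumptions.
import math

def fuse(prior: float, sensor_probs: list) -> float:
    """Fuse independent sensor probabilities that activity is hostile."""
    logit = lambda p: math.log(p / (1.0 - p))
    log_odds = logit(prior) + sum(logit(p) - logit(prior) for p in sensor_probs)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Prior from the target's intelligence history; then, say, an ambiguous
# electro-optical read plus a boat track consistent with a crossing...
print(round(fuse(0.7, [0.6, 0.85]), 2))  # 0.78: corroboration raises confidence
# ...versus two sensors reporting benign activity at the dock.
print(round(fuse(0.7, [0.3, 0.4]), 2))   # 0.11: context collapses it
```

The point is not the arithmetic but the product: a commander shown a fused, falling confidence score alongside the sensor evidence is better armed for a hard call than one staring at a grainy feed for a few seconds.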
Sounds great, one might say, but AI is not ready yet. That’s a fair criticism, but the technology won’t mature unless it receives the investment that allows it to. Investment in AI platforms — from autonomously piloted drones, to sensor fusion across the electromagnetic spectrum, to intelligence, surveillance, and reconnaissance collection processing and analysis — will encourage the technological development and algorithmic learning that must occur. The military will also need to answer hard questions about the proper operational relationship between people and machines. The answers will likely not be clear until the military incorporates and fields AI in a meaningful way.
AI incorporation is worth the discomfort that accompanies any paradigm shift, not to mention the mental work and joint force coordination needed to write new doctrine and procedures. I don’t toss around the term “paradigm shift” lightly. AI has paradigm-shifting potential to be a force multiplier, provide better information to commanders, and quicken operational tempo. In other words, it will deliver more and better outcomes, faster: a recipe for success in combat. AI will help give warfighters the context to make decisions they won’t second-guess later, and it will allow the military to increase its competitive advantage against both near-peer and non-state adversaries.
Hans Vreeland is a former Marine artillery officer. He honorably separated from the Marine Corps in 2017 and is currently an engineer at Anduril Industries. His opinions are his own.
Image: U.S. Air Force photo