
Whether opponents realize it or not, weapon autonomy, including the choice to kill, will win the drone debate, and in some cases already has. The false wall in the public’s understanding between “drones” and existing weapons is cracking. Before long, military necessity will take over. In fact, it already has.
Two words from the headlines, “suicide drone,” show the contrived division of drones from existing military technology and the futility of counter-autonomy ethical arguments. Also called “kamikaze drones,” these weapons surfaced broadly in the public eye during Iran’s December 2014 Straits exercise, which showcased the “innovation” of mass-deploying “suicide-ized” Yasir drones.
These drones, however, are only a military innovation to people who have never heard of missiles. The press just slapped the word “drone” onto manual-command-to-line-of-sight (MCLOS) weapons, a lineage reaching as far back as Nazi anti-ship weapons in 1943 and mass-produced Soviet anti-tank missiles of the 1960s. In fact, the armed Predator, the public face of drones, was originally conceived to fill a perceived cruise missile capability gap after a 2000 failure to strike Osama Bin Laden. We have come full circle: from cruise missiles, to missiles mounted on drones, to drones becoming the missiles they replaced.
So, what kind of autonomy do we allow missiles to have? The parlance of “fire and forget” is familiar from any number of TV shows and movies. The AGM-88 HARM anti-radar missile can be launched from dozens of miles away to seek out and autonomously destroy an active radar. The Harpoon and the new Naval Strike Missile would be used in similar fashion, seeking targets autonomously in designated areas over the horizon. DOD Directive 3000.09 defines an autonomous weapon as “a weapon system that, once activated, can select and engage targets without further intervention by a human operator.” It would appear that autonomous weapons are already here; some people just don’t like the new “drone” branding.
Continental Europe faced a similar dilemma in 1096, when Pope Urban II banned the use of a then-disruptive military technology: the crossbow. “Traditional” bows were still permitted, but traditionalist authorities disliked how easily crossbows let a man deliver such deadly force. Our opposition to drone autonomy is much the same. We have already decided that autonomous systems are ethical, just as medieval Europe permitted ordinary bows. Then, as now, a flimsy ethical distinction has been drawn across an evolution of technology whose military necessity will ultimately prevail. Pope Urban II’s sanction did not slow the proliferation of crossbows in Europe, and drone autonomy is coming whether we like it or not. Just ask South Korea’s Super aEgis II autonomous turrets.
What should be heartening to autonomy opponents is that the drone autonomy we fear will often be more ethical than the autonomy we have now. It is not the blind autonomy of “seek and destroy,” but a full round of discernment across any number of sensors before a kill decision is even considered. There will be places and scenarios where autonomous drones are unsuitable or their capabilities less reliable, but so it goes for any weapon system. In conventional fights, such as air and blue-water sea battles, autonomous drones are merely advanced missiles with pieces that return for reuse.
In more complicated scenarios involving insurgency or electromagnetic denial, humans are forced into close fights to designate and destroy targets. Leaders will make hard calls to save warfighter lives or meet objectives: calls that risk starting a conflict (first-mover advantage), risk further friendly casualties, or risk collateral damage. The autonomy, expendability, and standoff distance inherent to autonomous weapons let leaders mitigate human risk and extend the time available to evaluate it. That time and security allow commanders to be more careful and more prudent in the employment of force.
One man’s killer robot empowered to destroy is another man’s merciful missile empowered to disengage. We should retain some fear of what autonomy brings; the power of advanced weapons systems should be respected. But as the military presses technology’s logical development through strategies like the third offset, we cannot allow war’s terribleness to blind us to the future. Live by the sword, die by the gun; autonomy must be embraced, whether we like it or not.
Matthew Hipple is a U.S. Navy Surface Warfare Officer. A graduate of Georgetown University’s School of Foreign Service, he is Director of Online Content for the Center for International Maritime Security, where he hosts the Sea Control Podcast. The Venn diagram sections of “his opinions” and “official representation of the U.S. Navy, Department of Defense, or Government” do not intersect. Follow him on Twitter: @AmericaHipple


Killing is easy. It is especially easy when you don’t have to do it yourself. Buy a drone. Future warfare becomes warfare of the passive voice. Enemies were killed… Mistakes were made…
The HARM guides on a signal, a signal programmed by an operator. If it senses the signal, it goes to it. It doesn’t decide anything. It doesn’t know which signal is a higher threat to the strike package based on position and timing. It can’t discern whether the signal it is guiding on is an enemy or a friend. It can’t tell whether its target is parked next to a baby milk factory. It doesn’t get happy. It doesn’t get sad. It just runs programs.
The Harpoon is a bit more of a berserker. Like an evil hatchling, it opens its eyes to the world, imprints on the first thing it sees, goes to it – and then blows it up. End of line. Charming characteristic if you assume that everything transiting the GIUK Gap is a Soviet naval vessel. Less so if precision and discrimination are required.
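To put that in caricature, here is a hypothetical sketch, with every function invented for illustration and no resemblance to actual HARM or Harpoon flight software: the weapon imprints on the first contact it senses and never asks what that contact is.

```python
# A caricature of "fire and forget" homing. Hypothetical throughout: the
# sensor, guidance, and warhead functions are invented stand-ins, not any
# real missile's software. The point is what the loop does NOT contain:
# no friend-or-foe check, no threat ranking, no context about the target.

def blind_seeker(sense_contact, close_on, detonate):
    """Guide on the first contact sensed, whatever it is, and engage."""
    target = None
    while target is None:
        target = sense_contact()      # first return wins: warship, ferry, buoy
    while not close_on(target):       # fly toward it until intercept
        pass
    detonate()                        # end of line

# Toy demonstration with fake sensor and guidance functions:
if __name__ == "__main__":
    contacts = iter([None, None, "unidentified surface contact"])
    steps = {"remaining": 3}

    def sense_contact():
        return next(contacts)

    def close_on(target):
        steps["remaining"] -= 1
        return steps["remaining"] == 0   # "intercept" after a few steps

    def detonate():
        print("engaged the first thing seen, whatever it was")

    blind_seeker(sense_contact, close_on, detonate)
```

Everything I object to lives in what that loop leaves out.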
U.S. Navy surface ship captains regard it as standard procedure to double the bridge watch during sea and anchor detail. Twelve or more Sailors on the bridge, perhaps the captain and navigator as well, to perform the administrative task of not hitting the Chesapeake Bay Bridge Tunnel while traveling at headway speed through a well-charted and well-marked channel to a concrete pier that hasn’t moved in 35 years. All done in friendly waters, with no communication or GPS degradation. Seasoned SWOs will tell you about all manner of dangers: current and tides, winds and shoals, boats and those damned jet skis! Careers end when a ship runs aground, and thus the captain gives the task its due attention.
Is it too much to ask that we give our military task of killing people and breaking their things at least as much human attention as we do our administrative task of getting to our parking spot?
You have some answers here – but I would say they’re to the wrong questions.
First, swords were harder to use than modern-day guns. Blasting your way up a fortified hill is harder than dropping a bomb or launching a missile. Manned AA would be harder to use today than AEGIS. Just because something is “harder” does not mean it is the right way to do things. We don’t do these things because they are easy or hard; we do them because we are trying to deter, fight, and win wars.
Second, I think your point on Harpoon and HARM actually illustrates why autonomy as we imagine it is “superior” relative to the fears of some. The drones people fear are ones we would equip with a better ability to discern their targets than the “dumb” automated weapons of today. Now, there are far more aspects to this argument, especially when considering what environments those drones would be deployed in, but we’ll leave that to someone else.
Now, for your final point on oversight: this makes a broad assumption about the value of how we do business on the bridge. How many ships have run aground because a triple-stacked watch team still missed a nav aid or a turn? There are plenty of channel buoys that exist because someone with 20 people on the bridge still ran aground; call it diffusion of responsibility, when everyone assumes everyone else has oversight. On the other hand, commercial vessels come alongside the pier with a handful of sailors on the bridge. Merely that we choose to conduct sea and anchor detail in such a way does not independently validate that way.
That said, I don’t think it’s fair to compare sea and anchor detail to what we need from drones in combat, so really I’d argue that particular “case study” doesn’t prove either of us right… It merely demonstrates that I am not necessarily wrong.
Please explain to me how we are to defeat armed forces several times larger than our own without weapons systems capable of independent targeting. In fact, you can skip that argument entirely if you can explain to me how giving an enemy soldier attention in the last moments of his life adds to his worth as a human being.
The U.S. can have weapons systems capable of autonomous targeting, or it can return to large standing armies and high casualty wars. It is one or the other. I am happy to endorse whatever the American public chooses, and I am fairly confident which one they will pick.
One point of clarification: Urban II, and subsequently the Second Lateran Council, anathematized ALL archers and slingers. It had nothing to do with the ease of using such weapons (since both self bows and slings take significant training for serious effectiveness) but rather that missile weapons do not allow for mercy or the practice of ransom: you cannot surrender to an arrow. This is also tied up in the Peace and Truce of God movements and their unravelling, of course.
Really, the interest in discretion and proportionality is not too far off from modern-day concerns with autonomous weapons.
Hrm – what I was reading said Innocent II was the one to extend the ban to all archers and that Urban II’s case was solely against crossbows.
And crossbows were easier to use: set, aim, shoot. Using a standard bow well required quite a bit of finesse in both handling and operation.
How you managed to write this entire article without quoting the Kyle Reese line from Terminator is very impressive.
“Listen, and understand. That Terminator is out there. It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.”
What would be the most humane weapon? So far, drones are among the most humane, short of sending ground troops, since drones can hang around and observe longer than, say, a fighter jet.
But troops are more humane than air strikes. The SEALs managed to kill Bin Laden without hurting his children, whereas an air strike would’ve killed the whole family.
But it’s risky and difficult to send troops any time a terrorist needs to be taken out, especially after promising no boots on the ground, so we resort to air strikes. This wouldn’t be a problem if we could send ground robots instead. They would be the most humane option. They could even use nonlethal weapons to take the terrorist alive. Robots wouldn’t object to using tasers or tranquilizer darts against an armed terrorist.
Many discussions treat autonomous kill as some sort of new capability, presenting an old subject as new data.
A tiger trap for men is an autonomous kill device.
A punji trap is an unattended disabling capability.
So are any number of land mines and anti-personnel devices still in the inventory.
IEDs were frequently self-triggering.
Self-activating, sleeping weapons, however they are triggered, lie waiting on the sea floor.
None of this is new.
As for the benefits, too numerous to name.
Take a simple device: a light machine gun with motion sensing, à la the tunnels in Aliens.
It would have been invaluable in any massed wave attack: Korea, Khe Sanh…
Equally useful in stretching your troops around the perimeter: more guys sleep one more night and are that much better the next day.
The question about autonomy that needs to be addressed is how we can make these weapons more discriminating. What sensors and what logic need to go into them so that they provide an equally effective threat with a low error rate?
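One hypothetical shape for that logic, sketched here purely for illustration (every sensor name and threshold is invented, not drawn from any fielded system): require independent sensors to agree on a high-confidence hostile classification, and let any dissenting report veto the shot.

```python
# A hypothetical discrimination gate, invented for illustration only.
# Real criteria would come from doctrine and rules of engagement; the
# sensor names, threshold, and veto rule here are all assumptions.

from dataclasses import dataclass

@dataclass
class SensorReport:
    sensor: str            # e.g. "motion", "thermal", "acoustic"
    classification: str    # "hostile", "friendly", or "unknown"
    confidence: float      # 0.0 to 1.0

def authorize_engagement(reports, min_sensors=2, min_confidence=0.9):
    """Default to holding fire; engage only on corroborated hostile reports."""
    hostile = [r for r in reports
               if r.classification == "hostile" and r.confidence >= min_confidence]
    # Any sensor that does not call the contact hostile vetoes the shot.
    dissent = any(r.classification != "hostile" for r in reports)
    corroborating = {r.sensor for r in hostile}
    return len(corroborating) >= min_sensors and not dissent

# Two sensors agree the contact is hostile, but a third is unsure:
reports = [SensorReport("motion", "hostile", 0.95),
           SensorReport("thermal", "hostile", 0.93),
           SensorReport("acoustic", "unknown", 0.60)]
print(authorize_engagement(reports))   # False: the gate errs toward not firing
```

Every veto that lowers the error rate also makes the weapon easier to spoof into silence, so the sensor mix and the logic, not autonomy itself, are where the hard tradeoffs live.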