Iran’s Bet on Autonomous Weapons
Editor’s Note: Some links in this article lead to media sites and journals that are affiliated with the Iranian military. If access to such sites is prohibited by your employer’s policy, please do not click links in this article from a work computer.
Attempting to pass off a child’s astronaut costume as an innovation of Iran’s space agency was bizarre but familiar. Shared online by Minister of Information and Communications Technology Azari Jahromi in February 2020, the fit-for-Halloween suit exemplified the Iranian government’s penchant for fabricating technical achievements, whether through farcical stealth fighter jets, fake space monkeys, or oil drum surface-to-air missiles.
Iran’s history of shameless exaggeration leaves plenty of reason for skepticism when the nation’s military unveils new capabilities or when Iranian officials announce ambitious technological goals, as they did earlier this year. In a widely publicized January exercise, the Iranian Army Ground Forces showcased what they said were the country’s first autonomous suicide drones, reportedly capable of detecting and destroying targets “using advanced image processing capabilities and artificial intelligence.” Not to be outdone, the Islamic Revolutionary Guard Corps followed up with a demonstration of an explosive suicide drone, purportedly piloted with some level of AI. Later that month, Brig. Gen. Mohammad Hassan Nami — one of Jahromi’s predecessors as minister of information and communications technology — claimed that Iran would have fully autonomous systems on the battlefield by 2024.
Notwithstanding the well-founded reasons to avoid taking such claims at face value, Iran’s pursuit of autonomous weapons is no fanciful moonshot. Buoyed by a deep pool of technically educated Iranians and unencumbered by the rigors of thorough weapons testing, Iranian forces have the resources necessary to follow through on their autonomous aspirations. Iranian military literature suggests that the country’s army, air force, and revolutionary guard corps seek an early-adopter advantage by deploying AI-guided systems to the battlefield as soon as viable, however rudimentary and unreliable they may be. If that happens, it could be particularly destabilizing given Iran’s proxy operations strategy, with autonomous systems potentially empowering Iranian-backed militants to conduct faster and deadlier attacks at greater range.
AI as Iran’s Force Multiplier
The Iranian military’s interest in AI and autonomous systems is best understood in the context of its long pursuit of force-multiplying, asymmetric capabilities. Similar to constructing a network of loyal proxy militias and terrorist organizations across the region, employing a small army of the nation’s many educated computer engineers is feasible and scalable. A 2016 World Economic Forum study reported that Iranian universities graduated some 335,000 science, technology, engineering, and mathematics students annually — ranking the country fifth in the world on that metric. Yet, Iran’s Ministry of Science, Research, and Technology found that 41 percent of the nation’s computer science Ph.D. graduates were unemployed in 2018. While modern hardware and processors remain difficult to acquire, the nation’s public and private sectors have put some of this skilled labor to work in developing rudimentary AI tools for various purposes, from space-based agricultural monitoring to seminary research in Islamic sciences. Determined to compensate for the nation’s material weakness, Iranian military thinkers see immense value in employing this abundance of talent to integrate AI into the nation’s drone fleet, air defense network, and command systems.
Iranian military officials envisage AI tools as helping them to overcome persistent challenges, such as undertaking aerial navigation and precision targeting without domestic global positioning system infrastructure. AI could allow drones to fly on autopilot to predetermined locations and then conduct targeting based on image recognition technology. Networks of autonomous reconnaissance drones, armed with Iran’s improving compact radar and imaging sensors, could also feed volumes of surveillance data into a centralized, intelligent data processor, improving broader situational awareness. For example, a 2019 article in the Journal of Military Science and Technology, which is affiliated with Iran’s army, explored possible applications of the “Internet of Things” to the Iranian air force and urged the development of integrated data processing platforms to help guide everything from high-level military decisions to fleet management.
In April, the Research and Self-Sufficiency Jihad Organization of the Army Ground Force — which is responsible for producing and mainstreaming innovations — presented a stationary model of an autonomous drone swarm featuring one large drone and a cadre of smaller suicide drones at a technology exhibition. Tasnim News, an outlet with links to the revolutionary guard, reported that the new drones can operate either with ground control or using AI based on predetermined information stored in the “mother” drone. The self-sufficiency organization’s display signals an intent to produce and deploy such autonomous suicide drones and to use them in swarms, although a static hardware exhibition does not prove operability of the necessary software. Regardless, Tasnim described the new system as “perhaps the edge of unmanned control and operation technology in the world.”
The Iranian army’s handiwork has been more conspicuous than that of the revolutionary guard, which remains tight-lipped about its internal development efforts and stands to gain the most from the rise of Iranian autonomous systems. Enjoying a vast budget, political dominance, and control over much of the nation’s industrial base, the guard corps not only inherits the army’s innovations but can also independently develop capabilities for their more advanced drone hardware. And the Quds Force, the revolutionary guard’s external operations branch, will ultimately deploy and distribute new capabilities to Iran’s proxy forces across the region.
On the sidelines of the Great Prophet 15 exercise in January, the commander of the guard corps’ Aerospace Force, Amir Hajizadeh, told media that the combination of new missile capabilities, drone operations, and AI technology has “born new possibilities and power into the IRGC.” Hajizadeh’s statement suggests that the Aerospace Force harbors broader multi-platform ambitions for militarized AI. For example, autonomous drones could collect, interpret, and relay data in real time to aid the delivery of precision-targeted missiles.
While drones are the area in which Iran’s autonomous efforts are most advanced, they may not be the only beneficiaries of the nation’s investment in AI. Iranian officials claim that the Mobin, a cruise missile first displayed in 2019 at an air show in Russia, uses digital scene matching area correlation guidance, a type of AI developed in the 1980s capable of helping cruise missiles autonomously navigate to a target. Iran is also signaling ambitions for armed, remote-controlled ground robots, apparently with plans to link them in an autonomous network. If Iran can translate these ambitions into reality, armed surface robots could integrate with aerial intelligence to help sweep battlefields or patrol urban areas. And the potential applications don’t end on Earth’s surface. Last May, Iran revealed its latest foray into underwater vehicles: an unmanned midget submarine. Ramshackle as the submarine may appear, an intelligent command system could allow it to operate in networked packs, possibly lying in wait for adversaries beyond communications range. Nothing in the publicly available Iranian literature suggests that the country’s military has started exploring such underwater autonomy or possesses the necessary sensing equipment, but technical carryover from drones could speed progress. If Iran developed and deployed autonomous underwater loitering capabilities, even if they came with severe technical limitations, it could have major implications for maritime security in the Persian Gulf and beyond.
Detailed Iranian military science articles also suggest that the country’s technologists are contemplating AI-enhanced air defense systems capable of taking action without human input. As Iran brings online new indigenous radar and missile systems — including their improved copy of the S-300 — the army’s Khatam al-Anbia Joint Air Defense Base is emerging as a powerful hub of national command and control in a previously disaggregated system. The base has increased cooperation with the revolutionary guard corps’ separate air defense systems, including by hosting recent joint command exercises. This centralization will allow Iranian forces to more easily integrate AI tools into their command systems. If successful, Iran could use decision-making algorithms to augment its defense capabilities, possibly avoiding catastrophic human errors of the type that led the Aerospace Force to mistakenly shoot down a Ukrainian passenger plane in January 2020. However, Iran would need to develop sophisticated systems that can negate human mistakes to avoid repeating such deadly negligence across its own military networks and those of its proxy forces.
The final frontier of Iran’s AI goals lies in strategic integration across platforms with a central command and control system to speed decision-making. In a 2018 journal article, an author at the armed forces’ elite University and Institute of National Defense and Strategic Research extolled the virtues of centralized, intelligent battlefield management, especially the way it would enable the processing of intelligence from hundreds of sources and the command of weapon systems in real time across numerous commands. Iran is far from achieving such synergistic integration — its divided military forces create structural barriers to doing so — but it is investing in the necessary computing capabilities to solve these problems.
Learning from Others
In seeking guidance for their work on autonomous capabilities, Iranian media and strategic thinkers have looked to U.S. discourse surrounding these systems. Iranian popular media has translated and reported on remarks by U.S. officials calling AI the key to future military superiority and has closely followed AI-related efforts at America’s Defense Advanced Research Projects Agency. Media outlets in Iran noted that the final report of the U.S. National Security Commission on Artificial Intelligence endorsed strategic investment in AI to compete with China. Ignoring the report’s accompanying discussion of escalatory and ethical risks, Iranian thinkers have evidently taken away two messages: The rapid development of AI-enabled tools is critical to future competition, and efforts to limit this pursuit should be resisted.
Iranian thinkers are also learning through careful observation of other drone powers. In an article titled “Artificial Intelligence in the Armenia-Azerbaijan Air and Missile War,” the editorial board of the Iranian Journal of International Relations attributed Azerbaijan’s gains in the 2020 Nagorno-Karabakh war to its skilled use of coordinated drone warfare. While it is not clear that Azerbaijan actually employed AI in its operations — it is possible given the autonomous offerings of the Turkish arms firm STM — the authors argued that the conflict showed how “the synchronization of new weapons makes the modern battlefield deadlier.” They also emphasized that Azerbaijan’s military faced little opposition to its successful use of coordinated drone strikes.
A concerning lack of domestic debate in Iran suggests that legal and ethical concerns about autonomous systems, if they exist, will not challenge the military’s ambitions. A 2017 article in Iran’s International Law Journal, the country’s preeminent journal on international legal issues, is the only piece there to address lethal autonomous weapons and international humanitarian law. Likewise, the only relevant article in the Iranian journal Islamic Law examines civil liability for AI-enabled autonomous robots yet entirely ignores autonomous machines designed to cause harm. Meanwhile, Iran’s popular media and civil society organizations have produced limited coverage and discussion of the potential implications of developing AI-enabled military capabilities. One rare exception was a 2018 Fars News translation of Michael Klare’s essay “Alexa, Launch Our Nukes!” which emphasized the dangers of creating nuclear command and control systems devoid of human input.
The Potential for Destabilizing Effects
The United States, Russia, China, and even Turkey far exceed Iran’s capacity for the development of AI systems. But the Iranian military’s pursuit of such technology is especially dangerous because of its propensity for rapidly deploying novel technologies, often through unpredictable proxies. In the hands of non-state groups, even simplistic autonomous weapons systems will enable unprecedented aerial operations and could carry risks of dangerous malfunction.
Some experts have called for classifying swarms of autonomous, armed drones — such as the Foji swarming drones that the Iranian army claims it successfully used during the January exercise — as weapons of mass destruction. Through decentralized and adaptive command, swarms of such drones could overwhelm air defense systems to deliver explosives or rocket fire with speed and flexibility. In 2018, rebel forces used 13 highly rudimentary and reportedly pre-programmed drones operating in a swarm to damage two Russian military bases in Syria. Those explosive-laden drones lacked advanced sensors or AI, but the incident was a harbinger of the destructive power that autonomous drones could deliver in the future.
Armed with such capabilities, Iranian proxy forces may find themselves benefiting from remarkable improvements in the speed, precision, and, critically, range of lethal aerial operations. While militant groups have not yet used conventional drone arsenals for strategic bombing, it’s possible that autonomous systems may embolden them and allow militants to pursue more destructive objectives. Not only could these advantages make the sort of militant attacks plaguing U.S. forces in Iraq deadlier, but they could also facilitate coordinated strikes against targets far beyond their territories of control.
The 2019 attack on Saudi Aramco facilities serves as a potential warning of what may lie ahead. While the Iran-backed Houthis in Yemen claimed credit for that drone and cruise missile strike, U.S. intelligence suggests that the drones — composed of Iranian components — came from the north, not the south. Whether Iran itself or its proxies in Iraq launched those drones remains uncertain. But the event was suggestive of future potential dangers: With autonomous weapons in hand, Iran-backed militants could work in concert with other groups or Iranian forces themselves to orchestrate multi-system strikes, emanating from several locales, on distant, high-value targets.
Autonomous reconnaissance drones provided to an Iranian proxy group could also give Quds Force commanders access to unprecedented intelligence and speed their decision-making in relation to regional operations. No longer tied to commanding or operating drones locally, the Quds Force could program proxy-launched autonomous systems from the comfort of Iran, reducing their ground presence on foreign battlefields. Of course, an Iranian-built drone would be exposed as such by its Iranian components in the event it was downed, but a proxy group could still claim responsibility for any attacks. Proving otherwise and attributing a drone’s code to its Iranian authors will be extremely difficult.
Adapting to a Dangerous Future
American officials, and their counterparts in allied and partner nations, should closely monitor Iranian progress on autonomous weapons and begin to craft a response. In the near term, Iran’s targets should invest in technologies to jam and disrupt emerging aerial systems while carefully considering the risk that these tools may provoke unpredictable malfunctions in Iranian systems. Recent reports of looming U.S. sanctions on Iran’s drone component procurement networks may slow the country’s advances. But such steps won’t overcome the Iranian military’s intense desire for autonomous weapons. Iran’s targets will ultimately need to adapt to a future of potentially rapid escalation enabled by networked, intelligent weapons.
In formulating a strategy to deter attacks conducted using Iranian-built autonomous weapons, the United States and others will find themselves grappling with complex considerations of escalatory risk and holding Iranian-backed proxies accountable. Responding to small-scale attacks on U.S. bases in Iraq is only the beginning of this challenge. Israel’s recent shootdown of a single armed Iranian drone at the Jordanian border was a small step toward denying Iran the ability to wield its burgeoning airpower, although one that failed to deter Iran’s deadly suicide drone attack on an Israeli-operated tanker off the coast of Oman. But the big policy challenge that lies ahead is how to deter Iran and its proxies from using autonomous systems to launch more destructive attacks on high-value targets. Among the means to do so, American officials should consider ways to credibly threaten the imposition of clear and decisive costs on not only Iran-backed proxies that launch any such attacks, but also on the Iranian forces supplying the weapons and programming the targets. Most critically, Iran’s potential targets should not risk emboldening Iranian military officials by allowing them to develop a sense of AI-enabled impunity.
Evan Omeed Lisman is a research associate at the Center for Global Security Research at Lawrence Livermore National Laboratory, where he uses Persian-language materials to assess military technology programs in Iran, and a J.D. candidate at Yale Law School. He holds a B.A. in Middle Eastern studies from the University of California, Berkeley. The views and opinions of the author expressed herein do not necessarily state or reflect those of the U.S. government or Lawrence Livermore National Security, Inc.