A French Opinion on the Ethics of Autonomous Weapons


How will the world’s most powerful democracies deal with the ethical and legal dilemmas posed by the development of so-called “killer robots”, or lethal autonomous weapons systems (LAWS)? On the one hand, LAWS promise unparalleled operational advantages, like acting as a force multiplier, expanding the battlefield, and removing humans from dull, dirty, or dangerous missions. Authoritarian powers like China and Russia appear to be dedicating tremendous resources to pursuing these capabilities. On the other hand, giving autonomous weapons the authority to determine who lives or dies is an ethical, practical, and legal nightmare. Only a handful of states have well-documented policies on the matter, most notably the United States and the United Kingdom.

This article focuses on France, which has begun the difficult work of thinking through the ethical problems associated with lethal autonomous weapons systems. I’m a member of France’s Defense Ethics Committee, which reports directly to the country’s defense minister. Last year, the committee submitted an opinion on the “enhanced soldier,” which drew a red line between acceptable, non-invasive practices and unacceptable ones such as genetic engineering.

In April, the French defense ministry published another Defense Ethics Committee opinion on “the integration of autonomy into lethal weapon systems.” We argue that LAWS should be understood as fully autonomous weapons, which are ethically unacceptable for a number of reasons, but that partially autonomous lethal weapon systems (PALWS), which present both potential benefits and risks, could be ethically acceptable under certain conditions.

Why does it matter? First because, to this author’s knowledge, no other major military power has an ethics committee playing such a role within its ministry of defense, which in itself says something about the French ethos. The Defense Ethics Committee’s opinion on autonomous weapons is also likely to be scrutinized for a number of other reasons: it was France that, in 2013, initiated the multilateral debate on autonomous weapons; it has been an active participant in the debate since then (see, for example, the 2017 French-German proposal); and France will chair the next Convention on Certain Conventional Weapons Review Conference in December 2021.

LAWS Versus PALWS

The committee’s first, and arguably most important, task was to define key terms. It decided to define LAWS as:

A lethal weapon system programmed to be capable of changing its own rules of operation particularly as regards target engagement, beyond a determined framework of use, and capable of computing decisions to perform actions without any assessment of the situation by human military command.

The most important aspect of this definition is its narrowness: in line with France’s position at U.N. meetings, LAWS are considered fully autonomous systems. Defining LAWS has always been a challenge because, if understood as fully autonomous, we are talking about weapons that do not yet exist. As a result, there is no shared experience with or understanding of these weapons, which makes them unique in the history of arms control. In multilateral fora (e.g., the United Nations), some states have used the challenge of defining LAWS as an excuse to obstruct or redirect debate. By limiting LAWS to fully autonomous weapons, France defends a restrictive approach that avoids confusing those weapons with remotely operated or supervised weapon systems, which always involve a human operator.

The committee then introduced the category of partially autonomous lethal weapon systems (PALWS), which it defined as:

[I]ntegrating automation and software: [1] to which, after assessing the situation and under their responsibility, the military command can assign the computation and execution of tasks related to critical functions such as identification, classification, interception and engagement within time and space limits and under conditions; [2] which include technical safeguards or intrinsic characteristics to prevent failures, misuse and relinquishment by the command of two vital duties, namely situation assessment and reporting.

PALWS are an in-between category, distinct from two others. On the one hand, PALWS are not LAWS because they cannot change their own rules of operation: they “cannot take lethal initiatives.” On the other, PALWS are not automated weapon systems either. The difference between autonomy and automation is foundational. Once deployed, both autonomous and automated weapons can function without human involvement. However, while automation refers to the performance of a limited number of repetitive and pre-determined tasks (the system always reacts the same way to the same stimulus), autonomy involves an ability to learn and adjust in a changing environment. For example, mines and some air defense systems are automated in that they act in a reactive and repetitive way, by detonating or firing when their sensors detect an object. They do not learn or adapt, and they do not need to, because they do not have to face unexpected situations: their environment does not change. PALWS are not LAWS in that they are not fully autonomous. However, because they are still (partially) autonomous, they are not automated weapons either. Several existing weapons could be categorized as PALWS, among them loitering munitions such as the Israeli IAI Harop, the Turkish STM Kargu-2, and an unidentified Chinese model used in swarms; the American Collaborative Small Diameter Bomb (CSDB); and the drone warship Sea Hunter.

Now, two objections could be raised at this point. First, the distinction between LAWS and PALWS is certainly not new, either in the international debate or in national doctrines. Indeed, as early as 2012, when the United States became the first country to establish guidelines for the development and use of autonomy in weapon systems, it already distinguished between an autonomous weapon system and a semi-autonomous one. While the PALWS category is a terminological innovation in France, especially in a ministerial document, “partially” and “semi-” autonomous weapon systems, though defined differently, refer to the same challenge of describing what lies below the threshold of full autonomy.

Second, by adopting a narrow definition of LAWS limited to fully autonomous weapons, isn’t France defining them as something nobody ever wanted? Under the appearance of rejecting LAWS, a category of systems that have never really been under consideration, isn’t France actually legitimizing the more realistic category of PALWS? This is a legitimate concern. However, as a member of the committee that drafted the opinion, I can attest that the intention was not to legitimize any category of autonomous weapons. Instead, the goal was to add some needed intellectual rigor. The problem with the “LAWS” terminology is that “autonomous” is presumed to be binary: a system is, or is not, autonomous. And if it is autonomous, it is presumed to be fully autonomous, which, for good reasons, no one really wants. It is therefore more useful to adopt an alternative terminology based on the idea that the integration of autonomy into weapon systems can and will be gradual. Rejecting LAWS and focusing on PALWS for that reason does not mean that PALWS cannot be ethically problematic. It does not legitimize them; rather, their legitimacy depends on a number of criteria. The distinction offered by the committee simply reorients the discussion to center on the relevant category.

LAWS Are Not Acceptable

France has publicly renounced the use of fully autonomous lethal weapons, for both ethical and operational reasons, since 2013. In 2018, President Emmanuel Macron said he was “categorically opposed” to LAWS, to the extent they would “abolish all accountability.” He added, “the decision to give the green light has to be made by a human being because you need someone to take responsibility for it.” In May 2021, French Defense Minister Florence Parly confirmed that “France says and will always say no to killer robots. France refuses to entrust the decision of life or death to a machine that would act in a fully autonomous way and would escape any human control.”

France’s position on LAWS is in line with that of its closest allies. The U.S. Department of Defense’s 2012 directive explicitly stated that weapon systems should “allow commanders and operators to exercise appropriate levels of human judgment in the use of force.” This is, of course, another way of saying they should not be fully autonomous. Similarly, the United Kingdom has repeatedly stated that it “is not developing lethal autonomous weapons systems, and the operation of weapons systems by the UK armed forces will always be under human oversight and control.” Many other states have made similar remarks. This is indeed one of the few points of consensus in the U.N. debate on LAWS: in one way or another, everyone insists on retaining human control. No one wants a fully autonomous weapon, as full autonomy, literally the ability to set one’s own rules, would mean unpredictability, which would make such systems “militarily useless.”

But this only raises a more difficult question: Should countries preventatively ban LAWS? This is where the disagreement lies.

The French Defense Ethics Committee also rejected incorporating LAWS into the country’s military for a number of reasons. LAWS would:

[B]reak the chain of command; run counter to the constitutional principle of having liberty of action to dispose of the armed forces; not provide any assurance as to compliance with the principles of international humanitarian law; be contrary to our military ethics and the fundamental commitments made by French soldiers, i.e. honour, dignity, controlled use of force and humanity.

The committee considered it “legitimate and vital to continue research in the area of autonomy in lethal weapons,” with that research focused “on ways and means of enabling French forces to counter the use of LAWS by states or other enemies, but without using LAWS ourselves.”

PALWS Are Promising but Risky

PALWS offer a number of advantages in terms of performance, precision, pertinence, protection, and permanence (the “5 Ps”). In terms of performance, they will provide means to gain speed, in particular by shortening the observe-orient-decide-act loop. Moreover, one of the greatest challenges of future warfare will be defending against incoming conventional or nuclear strikes at hypersonic speed (at least Mach 5, with some reportedly reaching Mach 20), which leave very little time to react and therefore require greater autonomy. The same is true of defense against a saturation attack, or swarming. PALWS will also be useful for monitoring vast areas in all environments (land, air, sea, cyber, space) that cannot be covered without a certain degree of autonomy.

PALWS will also help to deal with the increasing mass of information (the “data deluge”) that confronts command centers and individual soldiers. Autonomous systems can aid decision-making on an increasingly interconnected battlefield. They will also help penetrate highly defended areas, physically and virtually; improve the precision of strikes; and protect soldiers, especially against improvised explosive devices or in contaminated environments. Finally, PALWS will last longer than human teams at sea, in the air, or on the ground, especially in dangerous or dirty environments, and will therefore provide greater permanence in a given area.

At the same time, PALWS present a number of risks. Deploying autonomous weapons, even if they are only partially autonomous, tests the moral and social acceptability of using force without human intervention. Domestic opposition to the use of PALWS, including among soldiers themselves, could undermine confidence in the state’s actions and legitimacy. Machine learning may also lead to unexpected and unwanted behavior, and it raises questions about the long-term reliability of these systems.

There is also the issue of accountability: In the event of an incident (e.g., friendly fire or civilian casualties), who should be held responsible? This is indeed one of the main criticisms directed at autonomous weapons, and one invoked by opponents as a reason to demand a preventative ban. The integration of autonomy into weapon systems will inevitably make it more difficult to establish responsibility, as there are many layers of control (state, manufacturer, programmer, system integrator, contractor, and military commander). Establishing responsibility will be difficult but not impossible, because an autonomous decision-making capacity does not “break the causal chain allowing attribution and responsibility,” as Professor Marco Sassòli explained in 2014. Moreover, such a dilution of responsibility is not unheard of: it is already what happens when a plane on autopilot crashes, or when a self-driving car has an accident.

Among the other risks of incorporating PALWS, the Defense Ethics Committee identified hacking (and thereby the hijacking of those systems); the psychological impact on humans, especially those excluded from the decision-making process or no longer able to understand what the system is doing, potentially causing a lack of involvement or a “loss of humanity” in combat; and other psychological risks such as blindly trusting the machine, losing confidence in the human ability to deal with a complex situation, and developing all kinds of cognitive biases. There is also a risk of lowering the threshold for the use of force, and a risk of global proliferation, including acquisition by non-state actors.

How PALWS Could Be Ethically Acceptable

It is essential to delineate the conditions under which it would be ethically acceptable to design, develop, and deploy PALWS. This is what the committee called the “5 Cs”: command, risk control, compliance, competence, and confidence.

For each mission, PALWS should operate under rules set by the human command (in terms of targets, spatial and temporal limits, rules of engagement, and other constraints); they should not be able to change those rules themselves (only the human command can); they should not be able to assign another PALWS a mission that departs from what was initially programmed, except after validation by the human command; and what they acquire through machine learning during a mission should not be used to program new tasks without human involvement.

Additionally, military personnel deploying PALWS (not only operators but also tactical leaders, theatre commanders, and strategic leaders) should be prepared and trained accordingly. Similarly, any personnel involved in the design, development, and promotion of those weapons (e.g., engineers, researchers, diplomats, politicians) should be made aware of the various risks and issues their use involves. Public authorities should be informed as well. Furthermore, the systems should include safeguards such as emergency deactivation or self-destruction in the event of a communication loss, as well as a device for aborting a mission in progress.

The French Defense Ethics Committee also recommended conducting a complete legal review whenever decision-making autonomy is developed in a lethal weapon system, “especially as far as identification, classification and opening fire functions are concerned.” Last but not least, it also advocated for international transparency.

Looking Ahead

There is nothing radically new in this French Defense Ethics Committee opinion for those closely following the decade-long international debate on more or less autonomous weapons. Most, if not all, of these recommendations have already been made by scholars and non-governmental organizations. What is notable about this ethical opinion is that it also draws on legal, scientific, and operational arguments, and that it comes from a committee set up by the French Ministry of Defense. What is at stake here, however, goes beyond any one state: the more individual states develop clear and detailed public policies, the easier it will be to agree on a normative framework at the global level.

Jean-Baptiste Jeangene Vilmer, Ph.D., a member of the French Defense Ethics Committee, is the director of the Institute for Strategic Research (IRSEM) at the French Ministry of the Armed Forces, and a nonresident senior fellow at the Atlantic Council, Washington, D.C. He is also an adjunct professor at the Paris School of International Affairs (PSIA), Sciences Po, and an Honorary Ancien of the NATO Defense College. The views and opinions expressed in this article are the author’s alone and do not necessarily reflect the official position of the French Defense Ethics Committee or the French Ministry of the Armed Forces.

Image: U.S. Navy