More Than a Bicycle Brake on a Missile: AI Ethics in Defense
The first casualty of future warfare may very well be AI ethics. The AI-enabled digital transformation of the defense sector will clearly not stop. In the United States, the Silicon Valley culture of rapid technological innovation, fast prototyping, economies of scale, and lean startup methodologies has increasingly influenced the institutions and programs of defense during the past decade. The new vocabulary is speed, agility, and flexibility — to achieve bigger scales, lower costs, and constant software iteration. The goals include faster procurement and acquisition, research and development, prototyping, and fielding. This requires commercial technology developed by startups and venture capital investment in dual-use technology, all to meet the demand for real-time product updates and modular, plug-and-play standards, such as the modular open systems approach used in defense acquisition in the United States.
With all this focus on agility, the tension between speed and ethical due care in war has increased. AI ethics is currently little more than a by-product of fears about near-peer competition and military defeat, which turns AI-enabled warfare into a self-fulfilling prophecy. Political will can help to restore the balance by promoting AI ethics that reflect a country’s core values and turning ethical principles and guidelines into meaningful practical arrangements. In the end, however, taking AI ethics seriously is a human choice rather than a technological fix or ethical regulation.
Speed vs. Ethical Due Care
The challenge is to make sure that new AI-enabled systems are not only safe and reliable, but also ethical. The technology should be fair, unbiased, and transparent, and should not cause any unintended or disproportionate harm. In other words: AI technology needs to be efficient, but it also needs to be responsible. To frame the issue properly, it is best to approach AI ethics as a sub-field of applied ethics centered on one question above all: the moral implications of AI technology in real-world, practical situations, as AI enables new autonomous capabilities.
Faced with a new geopolitical reality and a military-strategic context in which AI and emerging technologies are redefining warfare and competition, states are not currently moving toward new treaties to address the issue. For example, while around 30 countries are in favor of a treaty to preemptively ban lethal autonomous weapons, a credible text has not appeared on the horizon. Instead, the biggest military powers favor either new guidelines, more research, or the status quo — which also means that research and development continues virtually unhindered by an ethical debate.
Ethical Leadership of the U.S. Department of Defense
The trend is rather towards the adoption of ethical principles and guidelines without enforcement or compliance authorities. Most notably, the U.S. Department of Defense adopted its five ethical principles in February 2020, which influenced the adoption of similar principles by NATO (October 2021) and the United Kingdom (June 2022). The Department of Defense’s Joint Artificial Intelligence Center (now integrated into the Chief Digital and AI Office) organized various events with like-minded countries to discuss these principles and the broader ethical implications of increasingly integrating AI into defense.
With these principles, and the growing consensus among allies, an ethical framework seems to be emerging that can play various roles. How should we view this development? First, the ethical framework might be a stopgap solution before new compliance regulations are implemented. The ethical principles of the Department of Defense may eventually be translated into various laws and regulations, which may partly solve the tension between speed and due diligence. This, however, requires a long legislative process and then would need proper implementation and enforcement to be effective. There is no current indication of either bureaucratic or political will in this direction.
Second, the emerging framework may remain a proxy for regulation or even a call for deregulation. In this case, the ethical principles are more a form of self-regulation, but they are nonetheless important. Publishing them was itself a performative act: By doing so, the Department of Defense made a clear public statement that it is taking ethics seriously. Of course, one could dismiss the principles as mere “ethics-washing,” but the fact that the commitment was made publicly arguably now forces the Department of Defense to report on progress and to be accountable for any discrepancies between its policies, practices, and these principles.
Third, as witnessed in initiatives such as AI and Data Acceleration, the framework may require the Department of Defense to be strict with commercial AI providers, who will need to make sure that their products are trustworthy and explainable. This offers opportunities for explainable AI companies such as CalypsoAI, Fiddler AI, Robust Intelligence, and TruEra, whose products can plug into defense systems to mitigate the challenge of AI’s “black box” decisions. So far, however, it is difficult to make explainability a legal requirement for the procurement, development, or use of AI defense systems, as the very concept of explainable AI is still under development. Explainability alone might also never guarantee ethical or responsible AI, as AI systems are used in highly complex and unpredictable battlefield environments.
Fourth, the emerging ethical framework may encourage other countries to adopt similar principles. As the framework starts to encompass all NATO allies, the commitment to ethics becomes stronger. NATO’s plan to release a practical strategy for the use of autonomous systems is a case in point. To countries outside the bloc, it can show that NATO is claiming the moral high ground when it comes to AI ethics and defense. Even if other countries do not themselves see the importance, they may still be forced to take AI ethics seriously if they depend on AI-enhanced defense systems or dual-use technology produced by countries that do have these ethical principles. In any case, critics will ask a relevant question: What is the value of a moral high ground if Russian tanks are at your doorstep? Part of bridging the gap between speed and ethical due care is coming up with a convincing public diplomacy strategy that shows AI ethics is, as a reflection of a nation’s core values, much more than a by-product of comparative military strength. It has an inherent value that should not be part of a hypothetical equation based on what China or Russia might or might not do.
Implementation
When it comes to solving ethical challenges in the defense space, there are no magical solutions. The effects of ethical principles, guidelines, or frameworks will play out differently in different contexts, conflicts, and situations. Still, without some kind of implementation, these ethical standards don’t leave the realm of philosophical arguments.
The ethical principles of the Department of Defense and the emerging ethical framework that they contribute to are only meaningful if the United States and allies walk the talk. This means implementation of these principles in meaningful arrangements, whether related to the research, design, and development of AI systems or their deployment and use. While the ethical principles were announced with fanfare, there have so far not been public statements related to implementation. The best effort so far has been by the Defense Innovation Unit, which published responsible AI guidelines in November 2021 to translate ethical principles into meaningful arrangements, from integrating them into the entire technological lifecycle to validating that each prototype or project is fully aligned with them. It remains unclear, however, whether such guidelines can help to bridge the gap between speed and ethical due care.
The question is also whether there is an authentic wish to practice what the ethical principles preach. This is not only an American challenge. In the United Kingdom, for example, the Defense Artificial Intelligence Strategy published in June 2022 was very clear about its ambition: The United Kingdom’s approach “will enable – rather than constrain – the adoption and exploitation of AI-enabled solutions and capabilities across defense.” Such statements immediately weaken the normative power of AI ethics.
In addition to principles and guidelines, there is ongoing research into the practicalities of ethical AI, such as the Defense Advanced Research Projects Agency’s Explainable AI program and the Warring with Machines project at the Peace Research Institute Oslo. There are also rapid advances in dual-use technologies related to understandable and trustworthy AI, some of which defense organizations are already benefiting from. These technologies are used to check AI systems for bias, mistakes, and adversarial behavior, and to track how and why a system came to a given conclusion. While this is important from a military operational viewpoint, such system components also offer great potential to solve ethical challenges, especially if they were to become mandatory.
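To make concrete what such tooling does, consider a minimal sketch in Python. It assumes common open-source libraries rather than any specific vendor’s or defense system’s product, and all data and names in it are invented for the example. It trains a simple classifier, ranks which input features drive its predictions, and compares error rates across a hypothetical group attribute as a basic bias check.

    # Minimal sketch (not any specific defense system): inspect which features drive
    # a model's predictions and check for uneven error rates across a group attribute.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in data; "group" is an invented sensitive attribute for the bias check.
    X, y = make_classification(n_samples=2000, n_features=6, n_informative=4, random_state=0)
    group = (X[:, 0] > 0).astype(int)

    X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
        X, y, group, test_size=0.3, random_state=0
    )

    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # "Why did the system decide this?" — rank features by how much shuffling them hurts accuracy.
    importance = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for idx in importance.importances_mean.argsort()[::-1]:
        print(f"feature_{idx}: importance {importance.importances_mean[idx]:.3f}")

    # Basic bias check: does the error rate differ between the two groups?
    pred = model.predict(X_test)
    for g in (0, 1):
        mask = g_test == g
        print(f"group {g}: error rate {np.mean(pred[mask] != y_test[mask]):.3f}")

Operational systems would apply far more elaborate versions of these two questions (what drove the output, and for whom does it fail), but the underlying logic is similar.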
Advances in AI ethics in defense are currently mostly inward-looking and lack transparency. This creates an additional challenge, because one purpose of AI ethics is generally to reduce information asymmetries around AI systems. Such asymmetries are inherently part of the defense sector, whether in the competition with near-peer rivals, in the information flows going from governments to national parliaments and eventually to citizens, or within the military chain of command.
All this means that dealing with AI ethics within the defense sector will only ever be a partial solution. It will not address the bigger ethical questions of how defense fits into modern democratic societies and under what circumstances it is still ethical to go to war in the 21st century.
What Is the Future?
Militaries are very good at scenario thinking and forecasting. This has so far resulted in concepts like the networked battlefield (or network-centric warfare), algorithmic warfare, hyperwar, mosaic warfare, and software-defined warfare. AI ethics has been absent from all of that thinking, including from America’s current bet to meet the requirements of those scenarios: Joint All-Domain Command and Control. The reason is clear: The objective of military superiority and the threat of near-peer competition dominate the defense technology debate. This was the main narrative and conclusion of the final report of the National Security Commission on Artificial Intelligence: The United States needs to do better in the face of the potential threat of China gaining decisive AI overmatch. The core recommendation was clearly not that the United States should get its ethical checks and balances in order before designing and deploying more AI-enhanced systems.
This one-sided approach is a structural problem and reinforces the tension between speed and ethical due care. Calling the delivery of AI capabilities to the warfighter a “strategic imperative,” insisting that the United States “must win the AI competition that is intensifying strategic competition with China,” or stressing that the United Kingdom “must adopt and exploit AI at pace and scale for defense advantage” may be logical rhetoric from a national defense perspective, but such rhetoric widens the gap between AI ethics and the nascent but fast-growing incorporation of AI into the military. Within such a political mindset, there is simply nothing that AI ethics can do to keep up the pace.
What Can Be Done?
The road traveled so far by the Joint Artificial Intelligence Center and the Defense Innovation Unit is positive but only a starting point. First, more resources should go toward promoting responsible AI, common interests, and best practices on the implementation of AI ethics in defense among allies. More alignment can eventually help to solve two core challenges of AI ethics: the lack of consensus about definitions and concepts, and the lack of understanding of how principles should be translated into practice. This requires not only more research but also more sharing of results, information, and best practices. It means expanding the group of like-minded nations beyond NATO. This is not easy. The stark international divisions in the debate on lethal autonomous weapon systems are a case in point, but that debate also shows that there is much common ground, for example relating to the principle of meaningful human control.
Second, claiming the moral high ground will not win wars, but it can be very important in the broader scope of competition. The more AI ethics becomes a genuine reflection of core values rather than a by-product of calculations of comparative military strength, the more positive spillover effects there will be for a country’s soft power. This requires a long-term perspective and a bold strategy that go beyond the short-term alarmist messages about “the rise of near-peer competitors.”
These first two routes will not solve the biggest challenge of all: conflicting geopolitical and military interests that will continue to prevent AI ethics from being dealt with in the defense space. This inherently means limits to the leverage that AI ethics can have on “hyperwar.” Talking about an AI arms race may sound alarmist, but it is what we are facing as countries boost investment in, and research and development of, AI and emerging technologies in the face of perceived existential risks.
To solve this bigger challenge, there are no shortcuts. Neither international legislation nor ethical regulation will offer silver bullet solutions. The only hope is ultimately not in technological fixes, but rather in human beings. AI is inextricably linked to humans, from design and development to testing and deployment. Human values, norms, and behaviors can be coded into AI and are part of the broader frameworks and systems within which AI is being deployed and used. In order to safeguard ethics, we need to incorporate ethical principles in AI systems from the start, but that only gets us halfway.
We should also decide as human beings where to draw the line on AI incorporation in defense systems and how to choose “AI for good” over “AI for bad.” However, as long as countries only focus on the need to compete militarily, AI ethics will remain, to paraphrase the late German sociologist Ulrich Beck, the bicycle brakes on a hypersonic missile.
Jorrit Kamminga, Ph.D., is director of RAIN Ethics, a division of the RAIN Research Group, an international research firm specializing in the nexus of AI and defense.
Image: Close Combat Lethality Task Force by Alexander Gago.