Kicking off a New Approach to Cyber Ethics at the Department of Defense
When I worked at the U.S. Army’s Cyber Command, I received an abundance of training meant to prevent unauthorized disclosures of classified information. These limits are defined relatively clearly in law and policy, and anyone with significant experience in the intelligence community or military is well versed in what information is classified and what is not. Export control regimes, such as the International Traffic in Arms Regulations and the Export Administration Regulations, further restrict the dissemination of certain cyber and intelligence capabilities without a license. But there is a whole constellation of cyber expertise that is neither classified nor subject to export controls. And while ethics should always be at the forefront of a professional’s mind, they become even more important in the absence of law and regulation. The government and industry ought to do better in training employees to navigate these dilemmas.
In this context, recent reports of former U.S. government employees providing surveillance capabilities to the United Arab Emirates, a program known as Project Raven, have been troubling. The expertise of former U.S. government personnel allowed a foreign government to spy on human rights activists and even fellow Americans.
Project Raven crossed an ethical line (and has potential legal implications) but there are other cases — notional and real — that are less clear. Should a Western cybersecurity company ever do business with the Chinese government or other authoritarian regimes? What is the most responsible way for an engineer to release a technology that could have offensive applications? In many cases, these choices could have serious impacts on our cybersecurity and privacy even if they don’t involve the transfer of defense technology or military training that would trigger export controls.
Of course, this problem is neither new nor limited to the cyber domain. Private military companies recruit former military members of all stripes to use their training and combat experience on contracts that may support foreign countries or private companies. Ethical dilemmas often arise even when companies are licensed to export military technology or know-how. It was not long ago that BuzzFeed reported on a U.S. security company that employed former U.S. special operators to run a targeted killing program in Yemen on behalf of the United Arab Emirates.
Cyber capabilities, however, are especially prone to proliferation, and Congress has shown interest in this issue for several years. In a 2014 hearing, Rep. Mike Conaway noted that when “[w]e train an infantryman to use an M-16 … It’s pretty clear that when they leave, they don’t take that weapon with them back into the private sector.” While it’s true that an M-16, or a variant thereof, may be relatively easy to acquire outside the military (as are many offensive cyber tools), the ethical and legal limits on its use are much more clearly defined. In response, Adm. Michael Rogers, then the director of the National Security Agency and commander of U.S. Cyber Command, noted that employees are reminded that they are trained “under a specific set of authorities for a specific mission. And it’s not legal or appropriate to use this otherwise.” The problem is precisely that most operational ethics training focuses solely on ensuring that current employees do not use cyber capabilities outside of their specific mission while in uniform, and the military eschews any responsibility for preparing servicemembers for the ambiguous situations they will face outside of government service.
Congressional attention has intensified following the revelations about Project Raven. On May 20, a bipartisan group of lawmakers sent a letter to the secretary of state and director of national intelligence expressing their concern over this proliferation and requesting a strategy to prevent further abuses. Congress also recently passed legislation that would increase the transparency of the State Department’s process to approve the sale of cyber capabilities to foreign entities. Increased congressional oversight is a step in the right direction, but the Department of Defense (and other agencies and companies, for that matter) can take several steps immediately under their existing authorities.
While the U.S. government has relatively robust ethics regulations and training for current employees, they are almost entirely focused on preventing financial conflicts of interest. Little attention is paid to the concern that employees might use unclassified knowledge gained during government employment to support ethically dubious cyber or surveillance programs on behalf of a foreign government or even a private company.
The current training also largely focuses on senior officials. This focus makes sense when the intent is to prevent financial conflicts of interest, as senior officials have the most leverage over procurement decisions. Foreign governments or corporations developing cyber capabilities, however, are often interested in the technical and operational skills possessed by junior and mid-level employees. In the U.S. military, for example, these are mostly the enlisted soldiers (and some officers) trained as operators (a.k.a. hackers) and intelligence analysts. Simply put, if a company wanted to develop an exploit (software that takes advantage of a vulnerability in a computer system) in-house, it would be better off hiring a 25-year-old hacker with six years of experience than a former director of the National Security Agency.
Young employees with technical skills are also increasingly less likely to stay in government for an entire career. This means that a pool of cyber talent is entering the private sector with limited training on navigating the ethical dilemmas of their new profession.
While Congress considers legislation to increase oversight of the proliferation of cyber capabilities, the Department of Defense and other U.S. government agencies can take two immediate steps to remedy some of these concerns.
First, more robust cyber ethics training should be included in the ethics training already provided to Department of Defense and other government employees, especially those leaving government service. As a former victim of mandatory government training myself, I’m reluctant to suggest that my former colleagues should undergo additional training, especially if it is death by PowerPoint. It’s well documented that the military is already overwhelmed with training requirements, so many, in fact, that one study concluded it was impossible to complete all of them within a year. And so much of it is, frankly, not necessary. Employees leaving government service after serving in a cyber or intelligence capacity ought to be given ethics training that covers more than just financial conflicts of interest. This additional training could be incorporated into the ethics and security briefings that already occur when employees depart, without significantly increasing the length of the training.
What should the military teach its personnel — both civilian and uniformed — about cyber ethics? The Department of Defense should not be in the business of drawing the line between ethical and unethical behavior in the private sector. What the Department of Defense can and should do, however, is provide its employees with a framework for ethical decision-making in this space. To start with, employees should continue to be trained on the laws and regulations governing classified information and export controls, enabling them to distinguish between situations in which disclosing their cyber knowledge would be illegal and those in which it would be legal but possibly unethical. To address this second and much more nuanced category, the training should teach a general ethical framework that employees can apply broadly to any situation. There is no need for the government to develop this framework on its own. The Association for Computing Machinery already has a well-developed Code of Ethics and Professional Conduct that outlines 25 ethical principles for technology professionals, including to “avoid harm,” conduct “thorough evaluations of computer systems and their impacts, including analysis of possible risks,” ensure access is authorized or “compelled by the public good,” and “design and implement systems that are robustly and usably secure.” In accordance with the association’s principles, the training should use case studies to teach employees to evaluate a potential employer’s mission and contracts, consider the human rights records of their customers, and understand any potential malicious or offensive applications of technologies they may create in the course of their employment. Adopting such a framework and training regime will equip employees to ask the right questions as they navigate their post-government employment.
Ideally, the government would adopt the Association for Computing Machinery’s code of ethics, or something similar, and require that all employees serving in applicable intelligence or cyber roles receive instruction on it. The Department of Defense should adopt such a code immediately while waiting on legislative or executive action that would extend it to all applicable agencies, especially the 17 members of the intelligence community. There is precedent for this: President Barack Obama directed all agencies to adhere to the U.S. Army’s interrogation field manual in a 2009 executive order. Given that the Department of Defense is by far the U.S. government’s largest cybersecurity employer, with over 6,200 personnel in U.S. Cyber Command’s Cyber Mission Force alone, it makes sense to start this effort there.
Second, former employees should have a simple way to access ethics advice after leaving government service. Some agencies already provide this, but the process can be cumbersome. Ideally, it would involve a web-based platform where former employees could easily submit questions and receive timely responses. The easier this process is, the more likely former employees will use it before making a potentially unethical or illegal decision.
The advice would necessarily focus on reminding an employee of the regulatory, export control, and legal concerns applicable to their specific situation. Former employees should not be required to seek this advice prior to accepting civilian employment, nor should any advice be in itself legally binding, but former employees could of course be subject to civil or criminal penalties if they ignore this advice and subsequently violate the law. In more ambiguous situations, the government’s response should be to point the employee back to the resources used in their training and encourage them to follow that ethical framework. The focus should not be on further restricting the post-government activities of employees, but on providing a support system that enables more ethical decision-making. As former intelligence officer Phil Caruso has noted, banning former employees from seeking employment in certain sectors would only make recruiting cyber talent more difficult because “in a free society, potential recruits expect to have the freedom to leave government service.”
These are just two examples of actions that the Department of Defense and other government agencies could take now within their existing authorities. The cybersecurity industry should follow suit (as some companies surely already do). It’s not just the right thing to do; it’s a smart business decision. An employee involved in a malicious cyber incident is a liability to both their current and former employers and can cause significant reputational damage to the company. The public outrage that accompanies a cyber incident is an oft-underestimated factor in the cyber risk equation, and employee training is one relatively simple way to reduce that risk.
Whatever the solution, the U.S. government cannot afford to abide by the old refrain that technology is neutral and hope for the best. Technology cannot be divorced from its uses to avoid moral responsibility. The moral responsibility lies with those who build and operate the technology, and the government agencies and companies that train and employ them should take a more active role in promoting the conversation around ethics.
This won’t be a hard sell for the vast majority of the men and women who work at Cyber Command, the National Security Agency, technology companies, and in the various other parts of the cybersecurity and intelligence communities. All but a few want to use their technical skills to advance the common good, and their employers should help ensure that they have the tools to do so. The most important tool in that box is an understanding of how to approach the moral and ethical implications of their work.
Stuart Caudill is a former U.S. Army officer who served in various intelligence and cyber assignments. He is currently pursuing a Master of International Affairs at Columbia University as a Tillman Scholar.