A Heterodox Conclusion on Intelligence Failures in the Age of Cyberwarfare


Former CIA acting director Michael Morell defined the Russian hacking attack on the Democratic National Committee as “the political equivalent of 9/11.” The event constitutes a classic example of a warning failure. Such failures, as attested to by the rich literature on Pearl Harbor, Barbarossa, the Korean War, the 1973 Arab-Israeli War, and the 9/11 terrorist attacks, are not the product of insufficient information about the looming threat. Rather, they are the result of mistaken interpretation of available information. The New York Times investigation of the Russian intervention concluded that the American response to the attack was shaped by “a series of missed signals, slow responses and a continuing underestimation of the seriousness of the cyberattack.” This conclusion fits very well with the classic causes of major warning failures of the past. While the means of surprise attack have shifted from airplanes and tanks to email accounts and computer networks, the dynamics between the initiator of the attack and its victim have remained very much the same.

Information about the Russian hacking is still relatively scant, but we know enough to hazard some initial observations and one main conclusion.

The first observation involves the identity of the enemy. In the era of conventional warfare, intelligence agencies were tasked with answering questions concerning the “if,” “how,” “where,” and “when” of the attack — but the potential attacker’s identity was known. However, while much has been written about the difficulties involved in attributing specific attacks to specific states in the age of cyberwarfare, this was not a problem in the present case. By September 2015, the FBI already knew that the Russian cyberespionage group known as “the Dukes” was hacking the computers of the Democratic National Committee, and the record of “the Dukes” as a tool of the Federal Security Service (FSB) was in the public domain.

This raises the issue of how Moscow viewed the risk and the effectiveness of such an attack. The Russians have a long tradition of psychological warfare — “active measures,” in KGB terminology. Under Putin, they turned “information warfare” into a dominant component of their “new generation warfare.” We might have expected Russian policymakers, including Putin, an experienced KGB officer, to be more concerned about keeping Russia’s responsibility in the dark. Yet Putin approved the operation despite knowing that the gun had been “smoking” for a long time, making it very likely that he would be caught red-handed. A fear of American retaliation apparently did not play a significant role in the Kremlin’s strategic calculations. This attitude differs from the more risk-averse tendency of the Soviets during the Cold War. The KGB, just like the CIA, used various means of subversion to influence political processes in third world countries as well as in Western Europe. However, its disinformation campaigns in the United States were largely limited to smearing the CIA and the FBI or fanning conspiracy theories about the Kennedy assassination. The KGB never interfered in American presidential elections.

When the hacking started in July 2015, no one could have predicted that the American public response to a Russian intervention in the presidential elections would be so feeble. But the American response to the attack clearly resembles past responses to warnings of an impending attack. Specifically, three well-known common factors can already be identified.

First, standard operating procedures allow organizations to function effectively in routine situations, but may be disastrous in times of emergency. This was evident in 1941, when standard compartmentation procedures meant that information about a possible Japanese attack arrived in Pearl Harbor in partial and unprioritized form. Close to 75 years later, the FBI acted according to its own standard operating procedures in repeatedly sending routine warnings to the relevant official in the Democratic National Committee without even trying to meet and alert him personally. We do not yet know when the intelligence community started to realize the scope of the threat, but we do know that it did not bubble up to the top until far too late in the game. Situation Room meetings started only in July 2016, and intelligence assessments of the Russian role in the attack “took forever,” according to one unnamed senior administration official.

Second, much as with Pearl Harbor and 9/11, intelligence analysts did not imagine that the enemy would strike in the way it did. In 1941, a Japanese attack on Pearl Harbor was considered impossible. In 2001, the destruction of the symbols of American might by passenger planes was unimaginable to most. Similarly, despite what Western intelligence agencies observed of Russian information warfare in Estonia, Georgia, and Ukraine, David Sanger claims “American officials did not imagine that the Russians would dare try those techniques inside the United States.” The result was the underestimation of two threats: that Russia would leak the information it hacked in order to disrupt the electoral process, and that the documents would be used to effectively attack Clinton’s candidacy.

Third, President Obama’s low-key response to the threat, which deterrence theory would suggest incentivized more Russian aggressiveness, resembles both the Munich crisis of 1938 and Stalin’s thinking on the eve of the German attack on the Soviet Union in June 1941. Both cases were dominated by fear of escalation. In 1938, Chamberlain and Daladier accepted Hitler’s territorial demands in Czechoslovakia in exchange for “peace in our time.” In 1941, caught in his own distorted logic, Stalin refrained from taking defensive measures that could have triggered a German attack. Obama’s refusal to publicly accuse the Russians of interfering in the American democratic process before the elections was largely motivated by fear of Russian retaliation. Some officials were concerned about escalation into a larger cyber-conflict, while others worried a U.S. response would compromise diplomatic efforts over Syria. Still others thought that an official attribution to Russia would only feed Donald Trump’s narrative of a “rigged” election.

We may never know whether Obama made the right decision. The administration ultimately decided on a delayed half-measure: a warning delivered to the Kremlin a week before the election over the so-called “red phone” meant for nuclear crises. Administration officials claimed Obama’s warning to Putin included a remark that the “law of armed conflict” applied to cyberspace and that Russia would be held to that standard. This warning might have prevented Russian interference on election day itself, but does not seem to have accomplished anything else. If there is a single important lesson to be drawn from comparing past warning failures to the present one, it involves intelligence collection.

The common wisdom in the age of cyberwarfare is that “Stuxnet worms,” “Trojan horses,” and “firewalls” are the weapons by which the next war will be won.

Given these assumptions, the actual lessons that history has to teach may be counterintuitive. They show that while the United States has long relied on strategic warning obtained through technical means of collection, this form of intelligence-gathering was repeatedly revealed as futile. In 1941, the breaking of the Japanese diplomatic code (“Magic”), which allowed American intelligence to read Japanese diplomatic cables, gave no indication that Japan’s target would be Pearl Harbor. The Japanese were aware that no code was uncrackable and thus kept all mention of Pearl Harbor out of their diplomatic traffic. Their fleet sailed to Hawaii under complete radio silence. Similarly, in fall 1950, U.S. signals intelligence assets and aerial photography failed to locate a 260,000-soldier Chinese army in Korea. The Chinese lacked radio equipment and moved onto the battlefield at night using side roads. In October 1962, the United States was surprised by the positioning of Soviet attack missiles in Cuba. The surprise resulted partly from the fact that the ships that carried them took strict deception measures. In August 1990, U.S. spy satellites could observe every Iraqi tank on the borders of Kuwait. Nevertheless, the United States was surprised when they moved in. And prior to 9/11, the monitoring of warning indicators by sophisticated means of surveillance did not suffice to generate the necessary action that would have saved America from the greatest terrorist attack in history.

At the same time, history also shows that human intelligence, the oldest form of espionage, provided the best warnings. The Soviet Union received numerous warnings from its spy networks all over the world that Germany planned to attack in June 1941. These sufficed to convince the Red Army generals, but not Stalin, who preferred to trust Hitler rather than his spies. A few months later, he did better. He was informed that Japan would not attack Siberia by his agents in Japan: Richard Sorge, who was the German ambassador’s confidant in Tokyo, and Hotsumi Ozaki, who worked as an advisor to the Japanese prime minister. On the basis of this information, he rushed the Siberian divisions to the west and was able to win the Battle for Moscow. In 1962, Col. Oleg Penkovsky of Soviet Military Intelligence provided the CIA with the information that ultimately allowed the United States to identify the nuclear missiles in Cuba. This was more proof that a single spy can be more valuable than a massive technical surveillance apparatus. And in 1973, only a last-minute warning from Ashraf Marwan, the most valuable spy Israel ever had, saved the country from being completely surprised by the sudden Arab attack and prevented the fall of the entire Golan Heights to Syria.

This short history does not aim to discredit the value of collection by technological means. But in this sphere, the opponent is usually aware that its secrets might be compromised and can act accordingly. The British breaking of the German strategic code in World War II (“Ultra”) was a major achievement that helped the Allies win the war. But when the German navy added a fourth rotor to its Enigma cipher machine, the British failed to read its traffic for ten months, a failure that “threatened disaster” to the Allies’ chances in the Battle of the Atlantic. Today’s technology changes far faster, and in the ongoing race between offensive and defensive cyber-espionage, any offensive advantage is likely to be short-lived. This is not true in the realm of human spies, where no counter-measures can assure capture.

The American intelligence community is known for its outstanding technical collection capabilities, but remains far weaker in human intelligence. It had no valuable spy in Tokyo in 1941, in Moscow or Beijing in 1950, in Baghdad in 1990, or inside al Qaeda in 2001. Penkovsky, who played a crucial role in the 1962 crisis, was brought to the CIA by the British. This sad record shows that despite being technologically superior to every other nation, the United States might be ill-equipped for war in the cyber age for the counterintuitive reason that its human intelligence capabilities are not robust enough. The rise of the age of cyber conflict does not obviate the imperative for well-placed human sources. Perhaps the United States should direct more of its efforts in that direction.


Uri Bar-Joseph teaches at the Department of International Relations, Haifa University, Israel. His most recent book is The Angel: The Egyptian Spy Who Saved Israel (HarperCollins, 2016). His forthcoming book, with Rose McDermott, is Intelligence Success and Failure: The Human Factor (Oxford University Press, 2017).

Image: NIIST and ClipArtBest