Buggy Whips and Segways: Historical Misinnovation in National Security and Intelligence Technology
In the 1991 film Other People’s Money, Danny DeVito plays Lawrence Garfield, a corporate raider hell-bent on acquiring and dismantling a cable-and-wire manufacturing company. (He’s essentially Richard Gere in Pretty Woman without the change of heart.) In the climactic scene — a speech to stockholders — Garfield observes:
You know, at one time, there must have been dozens of companies making buggy whips. And I’ll bet the last company around was the one who made the best goddamn buggy whip you ever saw. Now, how would you like to have been a stockholder in that company?
Garfield’s sentiment surrounds us today as “innovate or die,” an aphorism so ubiquitous it spawned its own meta-study. The trend pervades education and sports, and it is the very essence of the tech industry. Innovation of national security and intelligence technologies is particularly compelling. Failure to innovate courts disaster, especially in combat or clandestine operations. No one wants to find themselves buggy whip in hand, confronting a superior adversary innovation beneath the waters of Charleston harbor or in the skies over Germany. Some underdogs might prevail, but it’s still better not to be the underdog.
The dichotomy of “innovate or die,” however, assumes that we are destined to do either one or the other. But one can innovate and die or misinnovate and die. While there certainly are ample “innovate or die” examples in intelligence history, there are also ample examples of innovations — or what we might call misinnovations — yielding equally disastrous results. Innovation scholar Adam Grant notes,
Our companies, communities, and countries don’t necessarily suffer from a shortage of novel ideas. They’re constrained by a shortage of people who excel at choosing the right novel ideas.
Put another way: In the military and intelligence worlds, even when good ideas are in short supply, “good idea fairies” are everywhere.
This essay proposes two major paths of misinnovation: “buggy whip” misinnovation (innovating along a well-established technological arc for too long) and “Segway” misinnovation (mismatching innovation to a need, often by jumping on the wrong technological arc).
“Buggy Whip” Misinnovation: Riding a Technological Arc for Too Long
Garfield’s aforementioned “best goddamn buggy whip” is more than just a failure to innovate — it is a form of misinnovation. In order to be the best, this whip must have benefited from progressive innovations: better materials, superior handle-to-whip ratios, improved whip assembly practices. Buggy whip misinnovation, then, is innovation that rides out a technological arc for too long. It is akin to the maxim attributed (apocryphally) to Henry Ford: “If I had asked people what they wanted, they would have said faster horses.”
Military history is rife with such examples. Japan’s Yamato-class battleships and the cancelled American Montana-class represented the pinnacle of a naval technology already rendered obsolete by the aircraft carrier. During the Cold War, the U.S. Air Force wanted strategic bombers like the B-58 Hustler and XB-70 Valkyrie to streak into the Soviet Union at speed and altitude — long after improved Soviet radars and surface-to-air missiles shattered this paradigm. Saddam Hussein’s 1980s “super-gun” project pushed traditional tube artillery technology to unattainable extremes when aircraft and rocket technologies were sounder investments.
The intelligence world also boasts its buggy whips. During World War I, the British developed “sound mirrors”: large, concave concrete structures designed to amplify the sound of approaching aircraft and enable detection beyond visual range. In a world of Gotha bombers, such devices were effective and offered precious minutes of advance warning of aerial attack. Yet construction of these devices continued well into the late 1930s. This is a prime example of buggy whip misinnovation — not just because radar rendered these systems obsolete, but because the speeds of all-metal monoplanes had already reduced the warning timelines of these concrete behemoths to the point of uselessness.
At the risk of committing blasphemy (one of us is a retired Air Force officer and the other a proud member of the Smithsonian Air & Space Society), the Blackbird aircraft family — including the A-12 OXCART and SR-71 Blackbird — was, arguably, a collection of buggy whips. Record-shattering, eye-watering, speed-personified buggy whips, to be sure, but buggy whips nonetheless. Evolving collection norms meant deep penetration reconnaissance flights into Soviet, Warsaw Pact, or Chinese airspace were not going to continue after May 1960, yet this was the driving rationale for the OXCART program and the fantastic technical leaps, resource investments, and support infrastructure it required. These aircraft yielded remarkable technological successes and some operational utility, but they did so at incredible expense, serious risk to aircraft and crew, and with ever-increasing redundancy in most mission applications compared with satellites and other (cheaper) manned aircraft.
Another twist on buggy whip misinnovation in overhead reconnaissance was the Manned Orbiting Laboratory, a failed four-year, $1.3 billion effort to orbit a manned U.S. intelligence gathering space station. (The Soviets actually fielded a short-lived scheme along these lines.) Looking back, the idea of “spies in space” seems ludicrous given what we now know about the evolution of strategic reconnaissance from manned aircraft to digital imagery and signals intelligence satellites. Yet this technological arc is more obvious in the rear-view mirror. By the early 1960s, manned aerial reconnaissance provided relative speed and flexibility in intelligence collection, while first-generation intelligence satellites offered superior coverage areas and political permissiveness. At the time, the Manned Orbiting Laboratory seemed (wrongly, as it turns out) to promise the advantages of manned reconnaissance combined with the collection area and political feasibility of satellites.
“Segway” Misinnovation: Mismatching Innovations to Needs
The ill-fated Segway is a noteworthy commercial misinnovation. In his Originals: How Non-Conformists Move the World, Grant cites the Segway as a prime example of how not to innovate. The Segway is a remarkable piece of technology in its own right. Yet it flopped as a consumer technology because it was premised on a need that did not exist: There was no appreciable consumer group willing to spend several thousand dollars to replace their own two feet.
“Segway” misinnovation often involves leaping aboard the wrong technological arc and failing to meet actual needs. Military technologies again offer parallel examples: the U.S. Navy’s flying aircraft carriers of the 1930s, Air Force parasite/trapeze escort fighters of the 1940s-1950s, nuclear-powered bombers, and the Soviets’ ground effect “Caspian Sea Monster” (to name a few). Each innovation seemed to offer utility. Many, like flying aircraft carriers, combined proven technologies in an unconventional manner. Yet each ultimately failed to meet end-user needs, often for reasons of practicality mirroring the Segway experience. Parasite escorts were technically feasible, yet they were dangerous and, ultimately, redundant once aerial refueling technology matured. Ground effect vessels like the “Caspian Sea Monster” offered an innovative means of transport, just not one that yielded any kind of practical military advantage. (Even more ridiculous examples of poorly conceived aerial ordnance innovations are well-known to national security nerds: bat bombs, cat bombs, pigeon bombs, and porn bombs.)
There is no shortage of intelligence Segways. CIA attempts to implant a housecat with eavesdropping electronics and release it into a target area rank as a favorite. Like the Segway, the project showed technical promise, but the experiment ended when the cat ran into a street and was run over by a taxicab, exposing the core fallacy of this misinnovation: Cats rarely (if ever) do what you want them to do. Another example is the XM-series of personnel “sniffers” deployed in Vietnam. These detected ammonia in order to identify groups of people in dense vegetation. Though functional, the sniffers suffered frequent false positives in the handheld configuration and, in the aircraft configuration, were vulnerable to enemy fire, spoofing, and natural and manmade false alarms.
A less ridiculous but still less-than-successful Segway misinnovation in Vietnam was the IGLOO WHITE program. This involved dropping hundreds of modified sonobuoys across suspected overland infiltration routes to detect Communist forces and logistical movements. The program suffered myriad difficulties: sensors dropped in uneven patterns, exposure of emplacing aircraft to enemy fire, short sensor operational life, VHF interference (including from U.S. radio-jamming supporting nearby airstrikes), and false positives from artillery detonations and aircraft noise. Ultimately, the program was a Segway misinnovation because, even when the technology did what it was supposed to do, the larger concept of employment was flawed: No available combination of sensor coverage, relay aircraft, data interpreters, and strike assets was ever going to meaningfully reduce Communist infiltration into South Vietnam.
Misinnovation Drivers
Misinnovation is nothing new. Martin van Creveld describes Renaissance-era Segway misinnovations in his seminal Technology and War:
Engineers obsessed with elaborate combinations of cylinders, pistons, cogwheels, cams, screws, and bevel gears designed many complicated machines. In the hope of getting funds from the powers of the day, they imagined scythed chariots, crank-propelled tanks, oared submarines, and flying machines with flapping wings.
Few, if any, were practical.
Like these Renaissance engineers, modern leaders are incentivized to innovate. Because change is easily demonstrated in real time (though truly successful innovation may only be understood as such in retrospect), there is impetus to “do something” to climb the proverbial ladder — or merely to maintain position on one’s current rung. Agencies and defense contractors alike are incentivized to innovate, occasionally for the mere appearance of innovation, to justify renewed (or expanded) budgets, to convey customer or taxpayer relevance, or to offer the promise of competitive advantage to backers.
The drive to embrace and demonstrate innovation is particularly strong in social environments that see maintenance or gradual improvement of extant capabilities as stasis (or retrograde). Current U.S. innovation culture is such an environment. TED talks and various “innovation”-themed events and industry concepts extol innovation; some seem to preach it as an end unto itself. A 2016 episode of the podcast Freakonomics dealt with innovation over-exuberance (and its deleterious effect on maintaining perfectly good, extant capabilities). In it, two scholars took exception to an innovation “fetish” that produces, in extreme form, “a mountain of dubious scholarship and magical thinking.” One permutation within intelligence circles is the habit of jumping on successive management fads, whether or not the government context actually fits the model.
Organizational factors also drive Segway misinnovation. Relatively short tour lengths in the military and intelligence worlds mean senior leaders take a job knowing they will have only two to three years to make an impact. Further, the Department of Defense rarely provides funds for any new idea in less than three to five years, owing to the need to build a programmatic wedge into the Future Years Defense Program. The only things senior leaders truly control during their tour are reorganizing and refocusing efforts. If they need new tools, collection, or processing capabilities, those won’t arrive until the next (or after-next) commander does. Much of this exacerbates what Steve Blank refers to as the “Red Queen problem,” as new disruptive technologies (and adversary applications thereof) outpace our own leadership rotations and traditional acquisition timelines.
The military may foster buggy whip misinnovation by pursuing service-specific, in-house “innovations” that meet a discrete need via dead-end, evolutionary technology — developed at exorbitant cost. As one of us experienced, the 1980s U.S. military intelligence community recognized the need for “secondary imagery dissemination” systems to transmit digital imagery products to field commanders to support dynamic planning, execution, and assessment of combat operations. Each service sent these requirements to its research and development and acquisition communities. This eventually resulted in systems like the Air Force’s Intratheater Imagery Transmission System: a finely tuned fax machine built to military ruggedness — and associated high cost. Meanwhile, the private sector explored better ways to transmit digital data, resulting in internet protocol routers and fiber-optic transmission. By the time of Operation Desert Storm, the military turned to interoperable commercial off-the-shelf solutions for imagery dissemination, scrapping its in-house buggy whips.
Innovate without Misinnovating
The Department of Defense has several initiatives to foster innovation and avoid misinnovation. DARPA makes pivotal investments in breakthrough technologies. The Defense Innovation Unit – Experimental (DIUx) and parallel efforts in the services and the intelligence community (like In-Q-Tel and IARPA) play similar roles. How can these efforts lead to useful innovation and avoid misinnovation?
While senior leader involvement is important, successful innovation requires bottom-up engagement with operators and analysts. Some of the most remarkable national security innovations originate through creative modification in the field. By contrast, many historical misinnovations involved “innovations” that were not field-practical. DARPA and the Air Force Battlelabs consciously engaged senior leaders and warfighters to ensure innovation efforts focused on real mission and operational needs. The Battlelabs sought creative ideas from Air Force operators, developed these with unconventional technology entrepreneurs, and demonstrated possible solutions for the operators. This built bottom-up structures with strong horizontal and vertical components.
Successful defense innovation efforts must accept more up-front risk of failure than traditional development and acquisition processes do. In the Defense Department and the intelligence community, however, failure can impose extreme costs on limited investment resources. These organizations cannot accept the estimated 90-percent failure rate of Silicon Valley start-ups. Intelligence budgets, while reasonably large, cannot and will not survive a Vegas-like approach to securing return on investment. The Battlelabs expected a 20 percent failure rate on demonstration projects. Innovation efforts can mitigate such risks through small venture capital investments and limited trial purchases.
As Patrick Ryan notes, innovation sponsors must provide meaningful transition vehicles for successful projects. While efforts such as DARPA, In-Q-Tel, and DIUx facilitate rapid fielding of one-off prototypes, the Defense Department must provide sufficient logistical infrastructure and ensure — as Jacquelyn Schneider recommends — corresponding innovations in training, doctrine, operations, and strategy. Similarly, no sensor or collection platform is quite so useless as one for which there is insufficient processing and exploitation. As intelligence scholar Mark Lowenthal laments,
builders of collection systems often ignore P&E and launch costs as part of their estimates for collection… akin to calculating the price of a new automobile without thinking about fuel, maintenance, and insurance.
Innovators may reflexively gravitate toward the “sexier” front end of collection systems. Yet processing, exploitation, analysis, and reporting technologies and processes are equally valid venues for innovation and must keep pace to support new, innovative forms of collection.
U.S. national security requires an intelligence technology advantage over potential adversaries. In a world facing an uncertain international order and rapid developments in (and uses of) disruptive technologies, the United States clearly faces a real “innovate or die” situation. Yet the risks of “misinnovate and die” could be equally catastrophic. Government leaders should consider patterns of historical misinnovation to ensure new “innovative” technologies aren’t buggy whips or Segways.
Joseph Caddell and Col. Robert Stiegel (USAF, ret.) are faculty members at the National Intelligence University in Bethesda, MD. The views expressed in this article are theirs alone and do not imply endorsement by the Department of Defense or the U.S. government.
Image: Ben Gray/Flickr