A Very British AI Revolution in Intelligence is Needed

Artificial intelligence (AI) and automation will make large numbers of intelligence staff the world over increasingly redundant. This means that people like me, an intelligence officer in the Royal Air Force, may eventually be out of a job — or at least see our jobs radically re-imagined. My view? The sooner the better.

As intelligence professionals, we should collude in the effort to replace ourselves, racing to reduce our numbers, get off the stage, and start the handover now. The aim should be to gradually replace personnel with algorithms to the furthest extent possible to maximize the effectiveness and efficiency of intelligence. If we try to hold back the coming tidal wave of artificial intelligence and automation, the data deluge will soon swamp us, before more forward-thinking adversaries harness its power and sweep us away.

To overcome institutional inertia and galvanize intelligence personnel to act, the U.K. Ministry of Defence should commit to automating 60 percent of its intelligence collection, processing, and analytical capability by 2025. This target might seem high, but in reality it undershoots industry estimates of what is currently possible.

There are four reasons for the Ministry of Defence to act now. If coping with the data deluge is the first, the second is the unprecedented opportunity that AI and automation offer to enhance intelligence capabilities by overcoming human cognitive, attentional, and physical limitations. Third, personnel costs are the principal reason NATO member states fall so far short of matching many potential adversaries in scale, since all of those adversaries sit in lower-wage economies. Automation offers a chance to multiply Western forces in unprecedented ways, allowing them to redress these imbalances; relatedly, it will help to rebalance lopsided tooth-to-tail ratios. Fourth, ceding ground to potential adversaries that are already investing in this space risks losing a decisive advantage that cannot be recovered.

Drowning in the Data Deluge

Today, the intelligence officer’s effort to stay on top of all the relevant information is an impossible task. There is vastly more to read than intelligence personnel can effectively filter, let alone process. The data deluge is real and pressing: as retired Cmdr. Ted Johnson and retired Gen. Charles Ward recently explained, the U.S. Air Force produced some 37 years’ worth of full-motion video in 2011 alone. This is too much for the human resources of the U.S. military, and consequently much of the collect goes unanalyzed. Sixty percent of it, they report, contains nothing of value, so reviewing it wastes analysts’ time. On top of unanalyzed data and wasted analyst hours, we can add cognitive biases and human error, as well as perhaps the greatest challenge of the data deluge: the risk that decision-making drowns in data. Indeed, one senior U.S. commander blamed ‘information overload’ for the failings that led to a drone strike causing civilian casualties in 2010.

Automating analysis is essential for dealing with the ever-increasing volume of data collected by defense surveillance assets alone. It is even more essential when we consider the quantities the ‘global digital society’ is producing: an estimated 5,200 GB for every person on earth by 2020, roughly equating to 18.5 million books of information on each and every person on the planet. Intelligence staff can’t analyze that volume of data. U.K. Defence must either automate or lose to those who do.
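That book-equivalence is easy to sanity-check. A minimal back-of-the-envelope sketch in Python, assuming decimal gigabytes (1 GB = 10^9 bytes) and plain uncompressed text, both my assumptions rather than the estimate’s:

```python
# Back-of-the-envelope check of the data-deluge figures cited above.
# Assumptions (mine, not the cited estimate's): 1 GB = 10**9 bytes,
# and books stored as plain, uncompressed text.
GB = 10**9
data_per_person = 5_200 * GB        # estimated data per person by 2020
books_per_person = 18_500_000       # the book-equivalent figure above

implied_book_size_kb = data_per_person / books_per_person / 1_000
print(f"Implied size per 'book': {implied_book_size_kb:.0f} KB")
# ~281 KB, on the order of a short novel in plain text, so the
# comparison is at least internally consistent.
```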

Unprecedented Opportunities

Even without the challenges of big data, there should be no disagreement over the need to replace first-line analysis with algorithms. We need only briefly review the range of domains in which specialized AI is already outperforming humans to see its utility. AI is consistently proving skeptics wrong, outperforming expectations and overcoming challenges thought impossible for a machine: first chess, now more complex games such as Go and poker. In medicine, AI is undertaking diagnosis, predicting heart attacks and detecting amyotrophic lateral sclerosis (ALS, or motor neuron disease) early, both with greater accuracy than human doctors; even deep skeptics of AI in medicine concede it offers value and advantage now and will play a more important role in the future. AI can be creative: it is composing music in the style of the great composers and writing new pieces that experts cannot tell apart from those composed by humans. It is mimicking, and perhaps surpassing, the work of great artists, providing psychotherapy (and may be approaching levels of emotional intelligence that surpass ours in some respects), and rooting itself in our cultures and history to understand us better. Facial recognition AI has been able to infer people’s politics, IQ, and sexuality. Specialized AI has proven itself more effective at probabilistic reasoning and is beginning to out-compete us in multiple domains.

Richard and Daniel Susskind, in their seminal book on AI and the future of work, The Future of the Professions (2015), describe how in 2014 the Associated Press began using algorithms to deliver company earnings reports, formerly the job of trained human professionals. The algorithms improved the speed, accuracy, and volume of production, turning out 15 times as many reports as the humans had. The employment of AI in report writing is growing ever more widespread: Forbes automates the production of earnings reports; the Press Association is replacing shuttered local news outlets in the United Kingdom with AI-generated local news (check out its prototype, urbs.london); the Associated Press and the L.A. Times use AI to write sports and earthquake reports; and The Washington Post publishes political articles written by its Heliograf algorithms, which produce explanatory articles in narrative form, optimized for human consumption and targeting specific groups in a specific editorial voice. Surveys have shown that humans are rarely able to differentiate what has been written by a newsbot from the work of a human writer. AI may soon be able to write stories humans can’t, drawing together disparate ‘who, what, why, when’ threads from events and trends in far-apart places to reveal interconnections, parallels, and patterns we would otherwise have missed. The Guardian suggests that an AI might one day win a Pulitzer. In short, intelligence officers cannot hope to outperform the speed, volume, and, in some cases (particularly with numerical data), the accuracy of the reports that AI newsbots can produce.
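Under the hood, early newsbots of this kind were largely template-driven natural-language generation over structured data feeds. A minimal sketch of the idea; the company, figures, and phrasing are invented for illustration, and production systems are far more sophisticated:

```python
# Minimal sketch of template-driven report generation, the core idea
# behind early earnings-report newsbots. All data here is invented.
def earnings_report(company: str, quarter: str, eps: float,
                    eps_forecast: float, revenue_m: float) -> str:
    verb = "beat" if eps >= eps_forecast else "missed"
    return (f"{company} {verb} analyst expectations in {quarter}, "
            f"reporting earnings of ${eps:.2f} per share against a "
            f"forecast of ${eps_forecast:.2f}, on revenue of "
            f"${revenue_m:,.0f} million.")

print(earnings_report("Acme Corp", "Q2 2018", 1.32, 1.25, 4_810))
```

Feed such templates structured filings at machine speed and the 15-fold increase in volume reported above follows naturally.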

Against the background of AI’s achievements, it’s a scandal that routine imagery analysis, ‘counting holes in runways’ after they’ve been bombed, as a colleague recently put it, is still performed by a human. If a task involves pattern recognition, AI should be doing it faster, more accurately, and more consistently than people can, with humans reviewing the output and correcting errors until the AI outperforms them.
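To make the point concrete, crater-counting is at root a blob-detection problem that off-the-shelf computer vision can already attempt. A toy sketch using OpenCV; the image path and detector thresholds are illustrative placeholders, and a real battle-damage assessment pipeline would use a trained model with an analyst reviewing its output:

```python
# Toy sketch: counting crater-like blobs in overhead imagery with
# OpenCV. "runway.png" and all thresholds are illustrative placeholders.
import cv2

img = cv2.imread("runway.png", cv2.IMREAD_GRAYSCALE)

params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 200            # ignore specks smaller than ~200 pixels
params.filterByCircularity = True
params.minCircularity = 0.6     # craters are roughly circular

detector = cv2.SimpleBlobDetector_create(params)
craters = detector.detect(img)  # returns one keypoint per candidate blob
print(f"Candidate craters detected: {len(craters)}")
```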

Higher-level analytical functions are being challenged across the board. AI may be set to replace or reduce the role of management consultants and accountants, the latter now employing ‘continuous accounting,’ in which AI reads everything, all data from multiple sources, identifies patterns and anomalies, and provides a far more accurate analysis than a human could produce by analyzing a subset of the data and drawing inferences from it. PwC is able to help “companies map buyer personas, simulate ‘future selves’ and anticipate customer behavior. AI has enabled these financial services companies to validate real-time business decisions within seconds.” AI can mine opinions and sentiment from social media, map social networks, and, with increasing accuracy, predict behaviors, albeit probabilistically rather than deterministically; but then, probabilistic prediction is all the human analyst can aspire to as well. AI is outperforming doctors in the diagnosis of breast cancer, reducing false negatives by some 39 percent.

Turning to the Susskinds again, a new medical paper was published every 41 seconds in 2014, a rate that has likely increased since. By their calculations, if only 2 percent of that literature were relevant to a given doctor in a particular field, it would take that doctor 21 hours each day, every day, to read all of it. And doctors have it easy. Researchers trying to understand proteins relevant to the suppression of tumors would need 38 years to read the 70,000 articles directly relevant to their area of research, and some 50 million more articles have a bearing on the subject and should be read, even if slightly tangential, a number that doubles every three years. AI developed by IBM was able to read all of these, and its suggestions led to the discovery of dozens of new insights at unprecedented rates.

Now imagine the complex interactions in human systems and the vast literature, let alone intelligence reporting, on, for example, the Middle East. Most intelligence analysts go from zero to hero on deployment, from knowing nothing to being an established expert. I recall my own confusion when, in Basra in 2005, I encountered in reports two insurgents with the same name in different areas of the city. It took me weeks to realize that they weren’t the same person; AI could have quickly disambiguated, geo-located, and provided a biography of each from the available reporting. A good friend, a thought-leader in both academia and the military, talks about the five-book rule: read five books on a given subject and you’re almost guaranteed to have the most expertise on it in any government or military meeting (and a great many in academia), a truism that has served me well. But five books are nothing compared to what an AI could ‘read’ on any given subject. AI seems certain to replace humans even in the domains where we feel most dominant.
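The Susskinds’ reading-load figures check out arithmetically. A quick sketch, assuming (my figure, not theirs) about 30 minutes of reading per paper:

```python
# Quick check of the reading-load arithmetic cited above. The
# 30-minutes-per-paper figure is my assumption, not the Susskinds'.
SECONDS_PER_YEAR = 365 * 24 * 3600
papers_per_year = SECONDS_PER_YEAR / 41     # one new paper every 41 seconds
relevant_per_year = 0.02 * papers_per_year  # 2 percent relevant to one doctor

minutes_per_paper = 30
hours_per_day = relevant_per_year * minutes_per_paper / 60 / 365
print(f"Relevant papers per year: {relevant_per_year:,.0f}")
print(f"Reading time required: {hours_per_day:.0f} hours per day")
# ~15,000 relevant papers a year, ~21 hours a day: no human can keep up.
```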

Humans are Too Expensive

Artificial intelligence and automation have terrific potential, particularly for European militaries, where McKinsey reports that some 50 percent of expenditure goes on personnel costs. In real terms, the United Kingdom’s defense spending stood at about $52 billion in 2016, compared with Russia’s $58.9 billion. For that money, the United Kingdom gets 152,000 active personnel and 81,000 reserves, while Russia gets more than 831,000 active personnel, 2 million reservists, and 659,000 active paramilitary forces. Personnel overheads also reduce the money available to spend on front-line equipment, which accounts in part for the vastly disproportionate difference between Russia’s fielded forces and the United Kingdom’s, given the relatively small gap in real-terms defense spending. There is, of course, an argument here for automation to replace manpower in frontline forces once mass no longer requires expensive humans; but regardless, AI can dramatically cut debilitating Western tooth-to-tail ratios, allowing defense spending to produce a far more capable force.
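A crude spend-per-head comparison makes the imbalance vivid. The sketch below uses only the headline figures above and deliberately ignores reserves, paramilitaries, purchasing-power differences, and how each budget splits between people and equipment:

```python
# Crude spend-per-active-soldier comparison from the figures above.
# Ignores reserves, paramilitaries, purchasing-power parity, and
# budget composition, so treat it as illustration only.
uk_budget, uk_active = 52e9, 152_000
ru_budget, ru_active = 58.9e9, 831_000

print(f"U.K.: ${uk_budget / uk_active:,.0f} per active service member")
print(f"Russia: ${ru_budget / ru_active:,.0f} per active service member")
# Roughly $342,000 versus $71,000: nearly a fivefold difference per head.
```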

Before You Object…

It’s easy to doubt predictions of AI’s future dominance, but consider this: when Oxford University Professor of Philosophy Nick Bostrom surveyed experts across the field for his landmark guide to AI, Superintelligence (2014), he found that opinions varied significantly on when we would first develop human-level machine intelligence, but the median responses suggested a 10 percent probability of it arriving as early as 2022, a 50 percent probability by 2040, and 90 percent by 2075. Few dispute that we will get there; the debate is over how long it will take. More notably, estimates of when we would create superintelligence, an intelligence exceeding the cognitive limitations of our brains, were less varied: 10 percent of respondents thought it would arrive within two years of human-level machine intelligence, and 75 percent thought it would follow within 30 years. Either way, what we’re debating is not whether machines will one day match human intelligence, nor whether they will one day exceed it, but when. And some estimates put human-level machine intelligence as early as 2022.

There will be other objections in the military mind. The Susskinds note that, following their presentations on AI, they are often approached by audience members keen to agree that AI will replace many professional functions in all fields except one: their own. Lawyers, for example, are quick to argue for a shake-up in health and education, but see less need for AI in a legal role. The Susskinds also note a common psychological defense, special pleading by hard cases: a professional claims that this or that function, the most difficult in their field, could never be automated, and uses this claim as an excuse to overlook the myriad ways in which AI indisputably could make a huge difference right now to tasks for which they are currently responsible. Another objection is what the Susskinds call ‘irrational rejectionism’: the dogmatic dismissal of a system of which the individual has no personal experience or detailed knowledge, yet against which they raise supposedly insurmountable barriers; difficulties are noted and applications discarded without further thought or investigation. Finally, the Susskinds point to a tactic likely to be raised (with some justification) across Britain’s Ministry of Defence: ‘technological myopia,’ the tendency to underestimate tomorrow’s technologies by pointing to the failures and shortcomings of today’s IT. When yet again you can’t log on to MODNet, the Ministry of Defence’s intranet, it’s hard to be convinced that computers might one day replace you. But it isn’t hard to think beyond these limitations. Industry, banking, science, and commerce are showing the way, even as many professions resist. Exposing the irrational objections that have deleteriously delayed the adoption of capability-enhancing technologies in other professions might smooth the road to wider acceptance of what’s coming.

The challenge to the intelligence profession is similar to that facing all professions. In the Susskinds’ memorable phrase, intelligence personnel, like many professionals, need to move from being “the sage on the stage” to “the guide on the side.” The Ministry of Defence needs to begin automating now, training people in statistics and machine learning to help guide and shape the development of the algorithms future warfare will need. There will still be a need for knowledge of the capabilities, tactics, politics, culture, and language of any given population or situation, and of its social and psychological dynamics, to help guide and refine the algorithms, at least for now. But the emphasis will be on a smaller number of deep experts overseeing analytical teams of intelbots and working with coders and statisticians. It isn’t just intelligence staff that will be involved in setting up this effort: there will be a key role for communications engineers and many others in the shift to ever-greater reliance on automation. Nor is it just intelligence roles that might be eased, reduced, or replaced by automation; logistic and administrative positions are in at least a similar position.

Senior commanders can’t rest easy either. Military judgment, the accumulation of experience (and biases) over a career, will be subject to scrutiny, war-gaming, assumption checks, and testing by algorithms that will know previous conflicts and history in far more detail than any human possibly could. They too may be reduced to being ‘on the loop,’ watching and intervening only if something goes wrong or needs refining, rather than ‘in the loop,’ slowing things down with human-level processing and decision-making. Some, in a final defense of human indispensability, reach for the argument from centaur chess, in which a human-machine team could beat a machine at chess for years after it was clear humans alone would almost always lose to a computer. But the balance of power seems to be shifting again, with humans now the limiting factor, introducing errors that make the centaur team worse than the unencumbered computer.

The data deluge requires all nations to act, and AI’s ability to enhance capabilities ought to incentivize them, but it is the risk of falling behind potential adversaries that should compel them to move fast. It can’t be often that Vladimir Putin, Elon Musk, and Stephen Hawking have all agreed with each other, but in 2017 all were clear on the opportunities and threats AI presents. In Putin’s words, “Whoever becomes the leader in this sphere will become the ruler of the world.” China aims to be the world leader in AI by 2030 and already employs AI in law enforcement. The United Kingdom has more to gain from getting ahead in the AI race than most, and every bit as much to lose if it lags behind.

Four things can be said with certainty, given the evidence presented. First, much intelligence (and other) work could be done more efficiently and effectively by specialized AI now, freeing analysts for other tasks or reducing the number of analysts required (and therefore also the number of commanders, support staff, and so on). Second, AI will only become more capable over time, increasing the number and complexity of tasks and roles it can fulfil. Third, AI may approach, match, or exceed human-level intelligence within our lifetimes. Fourth, potential adversaries are already pursuing the integration of AI within their military capabilities.

In Need of a Target

To keep up with potential adversaries and maximize efficiency and effectiveness, we need an ambitious target: a statement of intent to galvanize British military intelligence into embracing the future, prioritizing the important (preparing for the age of automation and growing capability) over the urgent (the demands of current operations amid under-funding and under-staffing). McKinsey estimates that some 49 percent of paid work activities could be automated now, while some 64 percent of data collection and 69 percent of data processing tasks could be handed to AI simply by ‘adapting currently demonstrated technology.’ A target of automating 60 percent of collection, processing, and current intelligence updates by 2025 would be impressive yet achievable, rallying both our men and women in service and our partners in industry to this critical goal, with a clear focus from recruitment through training to delivery.

There are some in the Ministry of Defence who have considered earlier drafts of this article controversial. It should not be. The argument here is entirely aligned with direction from the most senior levels of the ministry. In November 2017, for example, Gen. Sir Gordon Messenger, the United Kingdom’s Vice-Chief of the Defence Staff, reportedly stated at a TechUK dinner in London that “in future conflict, the winner will be the one who turns data into information dominance quicker than our adversary.” Similarly, U.K. Secretary of State for Defence Gavin Williamson told Parliament in January of this year that the ministry’s plans were about

taking our intelligence, surveillance and reconnaissance capability to the next level, hoovering up information from beneath the waves, from space, from across the increasingly important electro-magnetic spectrum, finding out what our enemies are doing in high-definition and providing artificial intelligence-enabled analysis that can stay ahead in a fast-moving world.

There is senior recognition that if we don’t move first, our adversaries will. We can lead in developing and integrating these systems, or be outmaneuvered by those who do. Best we start sharpening our edge now.


Wing Cmdr. Keith Dear is a serving Royal Air Force intelligence officer and a DPhil candidate at the University of Oxford’s Department of Experimental Psychology & Neuroscience. He co-leads the Defence Entrepreneurs’ Forum in the U.K., is the founder of Airbridge Aviation, a not-for-profit dedicated to bringing cargo drones to humanitarian emergencies, and is both a co-producer of and an expert panelist on the Royal United Services Institute-Atlantic Council’s “Staging the Future: Artificial Intelligence & Conflict” collaboration. Follow him on Twitter @WgCdr_K.

Image: Ars Electronica/Flickr
