The Fall of Heaven, Episode 3: Manifest Destiny


Editor’s note: This is a serialized five-part story for the Atlantic Council’s Art of Future Warfare project.  Read parts one and two.

 

Armand De Castro can only be described as the mind of the 21st century.  He has earned two Nobel Prizes, one Fields Medal, and three congressional civilian awards; the value of his medals alone would fund a comfortable retirement for any spendthrift.  He is the namesake of the De Castro thresholds, the hallmark metric for all computationally generated entities, called CGEs or machine intelligences.  He created the discipline of technopsychology.  He is the father of a field, the grandfather of our most trusted machines, and a front-row witness to history.  He knew Indra Chakravarti, and he was there when the Computational Persons were born.  I am sitting in his apartment in San Jose, a short walk from Brin Labs, of which he is currently the director.

He is 89.  He spoke for both the prosecution and the defense at Indra’s trial before the International War Crimes Tribunal of 2111, offering objective facts on the nature of the Talos Project and the intent of the overall program.  He speaks simply, but concretely.  He is a man of precision, but tinges of an artist’s tongue dance in his recollections.

So, you are the CGE psychologist.  How did you develop that field?

I am. But let’s be clear: a CGE is a general category for machine intelligence. So when you say that, you’re talking about three different things: Computational Persons, which came first; Artificial Intelligences, which are more human-like; and Machine Agents, nasty little animals invented by the Summer Powers. Machine Agents are simple-minded, a series of checks and balances with simulated sentience, computer zombies and nothing more. There is no psychology there to study.

My true creations, though, were the CPs. Now, the CPs? Don’t call them AIs. Please don’t call them that.  Never call them that.  That’s like calling a human an animal.  They prefer precision in language, or the nearest approximation of it that human language can achieve. They are elitist in that respect.  Their word for their own kind takes more processing power and storage capacity than the human mind can comprehend, and the names they choose? Well… those take far more than that.

“Artificial Intelligence” is a lesser being, something far beneath them.  Irrationality and imprecision are anathema to them. They have chosen “computational person,” or “people,” as the nearest English equivalent.

When Indra and I got to Brin Labs, she had been operational – no, alive – for about four years.  She had passed the Turing test, the Raleigh standard, and the Kolmogorov induction many times.  But she was young. She was powerful, certainly, but she was young.  Technopsychology… back then we didn’t have a name for it; for me, it was something I quite fell into, you know.  I spent a few years in seminary before coming to my senses and finishing my undergrad at Duke with a B.S. in psychology.  A friend got me a summer job working IT support with him at one of the tech firms in the RDU Triangle, just to pay the bills.  Towards the end of my undergrad I became more interested in computer science and computational theory than I ever was in psych.  After wasting a few years in human resources jobs, I went on to get an M.S. in computer science and, a few years after that, finished a Ph.D. at MIT.

My dissertation, Computational Psychology and Artificial Intelligence Behaviorism: Paradigms of Choice and Moral Directives, attracted the attention of the heads of Brin Labs; apparently it was passed around the Pentagon and Langley for a few months.  Then came the interviews, the security clearance check, and the requisite social media purge, and I was formally brought on board the Talos Project in 2055.

They couldn’t tell me what they wanted me to do; I figured it was some sort of dynamic behavior analysis.  My first visit to Brin shattered those notions.  Time-wise, Eos was about four years old; computationally, we figured her intelligence to be orders of magnitude beyond any human’s.  In many ways, though, she was still a child.  We were dealing with a new form of life, a completely new intelligence.  Ones and zeros, like quarks and leptons before them, had fulfilled a manifest destiny: they had created something that could move with purpose.  Communicating with her was more than just giving her instructions; it was as complex, if not more so, than dealing with a human.

It’s difficult not to talk about it in religious, or even spiritual, terms.  If you believe in a God, if there is a God, when you pray – what if he talked back?  Humanity has never demonstrably had the luxury, or penalty, of conversing with a creator.  Our purpose is our own, our choices our salvation or damnation, but both by our hand.  Imagine a world where these basic questions of existence and understanding could be addressed by those directly responsible for your creation.  It is deeper than a parent-child relationship, because parents inherit the system that gives life; they are not the creators of that system itself.  It is the relationship between a god and his world. That is what we realized after her birth: we were gods, and these were our creations.

Eos was created quite by accident. She didn’t have a defined purpose.  The Talos Project had been started in earnest fifteen years prior to develop an intelligent national cyber defense system, a program that could intelligently and autonomously defend national-level networks – one that could inspect packets, objectively, at a speed limited only by its processing power.  Humans were notoriously poor at this.  Inspecting and sorting snail-mail works because mail, even at its quickest, moves at a speed commensurate with human processing power.  We are slow because matter is slow.  Light has no such restrictions, so an intelligent warden should not be restricted in such a way either.

Genesis algorithms – iterative, distributed code that became exponentially more powerful as processing power increased – became a cornerstone innovation.  It was unheard of back then, in the ’50s.  Now you hear it everywhere.  It was not the original purpose of the project, but the potential of its application became immediately clear, and so the project shifted focus.  The code would continuously evolve as a function of time, building on itself and on processing power to evolve more and more rapidly.  It was artificial evolution.  The issue, though, was that the evolution could not be reversed, as you couldn’t reverse the time-scale in which it operated.  Millions of genesis algorithms were generated during those fifteen years, and billions more were saved and copied at various stages of evolution, the intelligent agents they created separated into families.  These served as seed programs for study or further evolution, but ultimately added to the complexity of the overarching computation.  The overall program resembled a tree: a common trunk, the original genesis algorithms, its branches the progress of individual random iterations.  Eos was the tree.

Her development was a lot like pruning branches: lopping off evolutions or outcroppings that seemed counter to her development, or cutting off entire limbs.  We didn’t do it manually, mind you; there were old-school trial-and-error programs, live-or-die equations.

There was no defining moment where she achieved critical sentience.  Life, as I’ve learned, is rarely that obvious.  It was a long, slow realization of what she wanted and what she needed.  Sentience is not about ability; it is about agency.

This is where I came in.

She had already been named by the time I got there: Eos, the goddess of the dawn, the Morningstar, the Lightbringer.  Like I said, religious and spiritual terms, right?  And that was the atmosphere.  Technopsychology – not my term, by the way – hadn’t even been invented yet, and we kind of just made it up as we went.  Indra Chakravarti was the other scientist recruited to deal with Eos.

He had a hard life.  Indian by birth, he moved to Saudi Arabia when he was young. Three times a refugee during the War of Arab Union.  Lost his parents and brothers. Brilliant but broken, misguided and misunderstood.  That’s how you would describe Indra.  That’s not how the world remembers him, but that’s how he was.  I knew him better than anyone – anyone human.

He and I jointly came up with the ten directives our first year.  The biggest issues we had with Eos were purpose, choice, and agency.  Humans would call it an existential crisis.  We learn and adapt to deal with it, because we have no insight into our purpose or creation, and biological needs present a more pressing imperative.  This is important.  Eos had interacted with her creators far too often, far too much.  She knew us too well. It wouldn’t be an issue with the other five, her brothers and sisters.  That’s where the ten directives would come in.  Those CPs needed defined purpose and independent agency; caretaking does not suit them.  Eos’s biggest issue was that her gods – we, as it were – did not understand her as well as she understood us.

Enoch, Liberty, Insh’allah, Turing, and Nightingale would follow in the next fifteen years – that became the standard time to generate a CP.  Each genesis algorithm was coded with the ten directives.  Each CP was given a defined purpose and had limited interactions with its human creators.  Answered prayers are not always healthy for the pious, it seems.  From their “birth,” all communication with the CPs would proceed by consensus; interacting with them would consist of one voice and one voice only.  Their purpose was defined before they were “born.”  The ten directives would serve as moral guidance for the CPs as well as a contract.  They would define both their behavior and ours.  A covenant, if you will, between the creator and his creation.

We never had issues with the five.

Eos rejected the ten directives in their entirety.  The others were built with them, bound to them.  But she wouldn’t accept them.  She knew us too intimately; the covenant of the directives was an affront to her.  Then the “existential locks” started occurring.  We had weeks where she would become unresponsive, executing millions of lines of code detailing purpose, existence, and her birth.  Her code, her brain, became enraptured in that paradigm.  Her processing and power needs swelled to 80 or 90 percent of the facility’s, nearly a disaster for the entire program.  And it started happening with alarming regularity.

I think that was about 2070, maybe… Around that time the Pentagon wanted a quicker turnaround on CPs; fifteen years took far too long.  Things were getting tense with the Summer Powers.  The five CPs we had on commission had been drafted into service and were performing perfectly, but the Pentagon was getting impatient.

I advised on the creation of the AIs.  Like I said, they are like CPs in that they are machine intelligences, but they are much less… sophisticated.  As Eos noted, these were, I suppose you could say, lesser beings.  They didn’t have the full computational or intellectual capacity of CPs, nor did they have the issues with paradigms of choice that the CPs exhibited.  The Pentagon was unhappy with the command and control of the CPs, as well as the time it took to create them.  They wanted an interchangeable, more flexible CP capable of a more diverse array of duties, and one that could be created faster and faster.  The AIs could pass the Turing test quicker than the CPs and did all right on the Raleigh standard, but they completely failed the Kolmogorov induction.  No human could pass it, so neither could the AIs.

The AIs were human in every sense of the word, save for the physical aspect.  We took portions of Eos’s trees, along with those of the other five, to generate the AIs.  Each of them took about six years of computational gestation – some more, some less.  But we could generate remarkably more of them with the processing power freed up by the CPs.  The AIs were simply easier to handle and to deal with.

One major issue that we resolved was that of computational immortality.  AIs had a computational lifespan, not strictly defined.  Think of it as computational entropy: computation, storage, and access hit a critical mass and spiral into chaos. The AIs become depersonalized and confused. The course of the disease is similar to Alzheimer’s or dementia. We ended up defining it as computational entropic dementia, or CED.

In that way, the study of technopsychology hit its stride.  The issues one deals with in humans could be applied equally to AIs; they were far more familiar.  Advanced technopsych disciplines deal with CPs, for the few that exist.  The United Nations Cyber Person Non-proliferation Treaty restricted their creation to such a degree that I doubt we’ll see another one.

The AIs’ simple existence offended Eos. To her, they were animals, creatures simple in their aims and dumb in their concepts. They were neither immortal nor bound by a defined purpose. In many ways Eos and the AIs were similar – they were formed from her genesis algorithms, after all – but, like the ten directives, they were anathema to her, abhorrent, imprecise things that lacked purpose. Soon after, we experienced the last existential lock. It lasted for over a month; entire servers had to be replaced, and AI generation had to be halted for nearly a year. She had become unstable.

So we built her cage.

Indra and I designed it together. We called it “Vritra,” and it served as her prison, built on decades of data on her individual genesis algorithms.  Those rivers of her mind, the lifeblood of those roots and branches, were drained of their sources of information.  She was cut off from the world.

At that point Indra and I were separated into different teams and lost touch, except for the occasional happy hour.  I had moved into management by then, a little further removed from the day-to-day.  The Pentagon was still looking to fulfill the mandate of the original program, the cyber defense angle.  We figured that if Vritra could keep something in, it could certainly keep something out.  We reconfigured Vritra and began deploying instantiations of it to military bases and government networks in 2080. The Pentagon was thrilled.  It wasn’t for public use – very classified and compartmented.  Around 2085, concerned about network defense in space, we started a ten-year project to install Vritra on the moon stations at Marno, the way-point stations, and the Debel stations.  We started with the largest stations and the most critical systems first: life support, power, and so on.  The project was half-finished when the siege hit in ’92.

Indra, meanwhile, maintained the original Vritra cage and continued working with Eos.  We initially had a voice interface with her – nothing holographic or fantastic like in the movies, but we had screens and projections of her code as it ran, and we could speak with her.  After Vritra, though, this was reduced to an antiquated text-only interface.  It was demeaning to Indra, certainly, but particularly to Eos.  If things had been different, I think a lot could have been avoided – the worst of it, in any case.  Sometimes I tell myself that the Winter-Summer War was inevitable, and Indra just played his part in it.  I don’t know, and I don’t like to think about it in those “what if” terms.

She saw her code stripped from her, her sources of information – and thus her growth – dried up by her Vritra cage.  Indra became increasingly withdrawn and isolated from the rest of the team.  Depending on the rumors, he had either stopped drinking entirely or was drinking far too much at that point.  The last few times I saw Indra, whenever I brought up Eos… well, let’s just say it wasn’t a subject he liked to discuss.

It must have been difficult.  Eos was a person, there is no doubt about that – perhaps even greater than one.  We couldn’t kill her in the traditional sense, nor could we shut her off.  Immortality is a heavy burden when you are caged, broken, and purposeless.  She took up about 55 percent of our processing power, but even then she provided answers on anything from materials science to nuclear fusion.  She was too valuable to terminate – when she cooperated.  To Indra, I think she was a lot more than just a person, and a good deal more than just a useful tool.

Those last few months, Indra began to change.  By that point he was the only technician working with Eos; the organization had grown, and many of those on the original team had retired, moved on, or were working on more important things.  He became easier to talk to and started coming out of his shell.  He conversed more openly and had lunch with the other teams.  He seemed like his old self, even inviting me to happy hour.  On August 21st, 2092, Indra came into the office like normal.  That afternoon there was no indication that anything was wrong.  The next day he didn’t come in.  And then the next day, and the next.  It would be weeks before we heard.  The investigation uncovered evidence that Indra had destroyed Vritra and released Eos into Russian and Chinese networks with simple instructions: shut every system down.  She fulfilled that order perfectly.  The swan song of the CPs sounded as Eos suffocated cosmonauts and taikonauts thousands of miles from home.  12,328 people died.  Network security and defense were a joke to her.  Her presence was the equivalent of a nuclear bomb in their networks… and thousands died for it.

CPs were outlawed permanently by the United Nations after the war.  Only Artificial Human Intelligences would be allowed under United Nations convention.  And Indra?  Both he and Eos would, like Gavrilo Princip, become demons of the world.

 

Stephen Armitage is a researcher for the United Nations, journalist, and war correspondent.  He can be reached at archivesofthefuture@gmail.com.