May 25, 2013

Humans With Amped Intelligence Could Be More Powerful Than AI

 
With much of our attention focused on the rise of advanced artificial intelligence, few consider the potential for radically amplified human intelligence (IA). It’s an open question as to which will come first, but a technologically boosted brain could be just as powerful, and just as dangerous, as AI.
As a species, we’ve been amplifying our brains for millennia. Or at least we’ve tried to. Looking to overcome our cognitive limitations, humans have employed everything from writing, language, and meditative techniques straight through to today’s nootropics. But none of these compare to what’s in store.
Unlike efforts to develop artificial general intelligence (AGI), or even an artificial superintelligence (SAI), the human brain already presents us with a pre-existing intelligence to work with. Radically extending the abilities of a pre-existing human mind — whether it be through genetics, cybernetics or the integration of external devices — could result in something quite similar to how we envision advanced AI.
Looking to learn more about this, I contacted futurist Michael Anissimov, a blogger at Accelerating Future and a co-organizer of the Singularity Summit. He’s given this subject considerable thought — and warns that we need to be just as wary of IA as we are AI.
Michael, when we speak of Intelligence Amplification, what are we really talking about? Are we looking to create Einsteins? Or is it something significantly more profound?
The real objective of IA is to create super-Einsteins, persons qualitatively smarter than any human being that has ever lived. There will be a number of steps on the way there.
The first step will be to create a direct neural link to information. Think of it as a "telepathic Google."
The next step will be to develop brain-computer interfaces that augment the visual cortex, the best-understood part of the brain. This would boost our spatial visualization and manipulation capabilities. Imagine being able to visualize a complex blueprint with high reliability and detail, or to learn new blueprints quickly. There will also be augmentations that focus on other portions of the sensory cortex, like the tactile and auditory cortices.
The third step involves the genuine augmentation of the prefrontal cortex. This is the Holy Grail of IA research — enhancing the way we combine perceptual data to form concepts. The end result would be cognitive super-MacGyvers, people who perform apparently impossible intellectual feats: mind-controlling other people, beating the stock market, or designing inventions that change the world almost overnight. This seems impossible to us now in the same way that all our modern scientific achievements would have seemed impossible to a stone age human — but the possibility is real.
For it to be otherwise would require that there is some mysterious metaphysical ceiling on qualitative intelligence that miraculously exists at just above the human level. Given that mankind was the first generally intelligent organism to evolve on this planet, that seems highly implausible. We shouldn't expect version one to be the final version, any more than we should have expected the Model T to be the fastest car ever built.
Looking ahead to the next few decades, how could IA come about? Is the human brain really that fungible?
The human brain is not really that fungible. It is the product of more than seven million years of evolutionary optimization and fine-tuning, which is to say that it's already highly optimized given its inherent constraints. Attempts to overclock it usually cause it to break, as demonstrated by the horrific effects of amphetamine addiction.
Chemicals are not targeted enough to produce big gains in human cognitive performance. The evidence for the effectiveness of current "brain-enhancing drugs" is extremely sketchy. To achieve real strides will require brain implants with connections to millions of neurons. This will require millions of tiny electrodes, and a control system to synchronize them all. Current state-of-the-art brain-computer interfaces have around 1,000 connections, so they need to be scaled up by more than 1,000 times to get anywhere interesting. Even if you assume exponential improvement, it will be a while before this is possible — at least 15 to 20 years.
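A rough sanity check on that timeline (the doubling period below is my assumption, not a figure Anissimov gives): going from roughly a thousand connections to the millions he calls for is about ten doublings, since

$$
\frac{10^{6}\ \text{electrodes}}{10^{3}\ \text{electrodes}} = 10^{3} \approx 2^{10},
$$

so at an assumed doubling time of 1.5 to 2 years for electrode counts, that works out to roughly 15 to 20 years, in line with his estimate.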
Improvement in IA rests upon progress in nano-manufacturing. Brain-computer interface engineers, like Ed Boyden at MIT, depend upon improvements in manufacturing to build these devices. Manufacturing is the linchpin on which everything else depends. Given that there is very little development of atomically precise manufacturing technologies, nanoscale self-assembly seems like the most likely route to million-electrode brain-computer interfaces. Nanoscale self-assembly is not atomically precise, but it's precise by the standards of bulk manufacturing and photolithography.
What potential psychological side-effects may emerge from a radically enhanced human? Would they even be considered a human at this point?
One of the most salient side effects would be insanity. The human brain is an extremely fine-tuned and calibrated machine. Most perturbations to this tuning qualify as what we would consider "crazy." There are many different types of insanity, far more than there are types of sanity. From the inside, insanity seems perfectly sane, so we'd probably have a lot of trouble convincing these people they are insane.
Even in the case of perfect sanity, side effects might include seizures, information overload, and possibly feelings of egomania or extreme alienation. Smart people tend to feel comparatively more alienated in the world, and for a being smarter than everyone, the effect would be greatly amplified.
Most very smart people are not jovial and sociable like Richard Feynman. Hemingway said, "An intelligent man is sometimes forced to be drunk to spend time with his fools." What if drunkenness were not enough to instill camaraderie and mutual affection? There could be a clean empathy break that leads to psychopathy.
So which will come first? AI or IA?
It's very difficult to predict either. There is a tremendous bias for wanting IA to come first, because of all the fun movies and video games with intelligence-enhanced protagonists. It's important to recognize that this bias in favor of IA does not in fact influence the actual technological difficulty of the approach. My guess is that AI will come first because development is so much cheaper and cleaner.
Both endeavors are extremely difficult. They may not come to pass until the 2060s, 2070s, or later. Eventually, however, they must both come to pass — there's nothing magical about intelligence, and the demand for its enhancement is enormous. It would require nothing less than a global totalitarian Luddite dictatorship to hold either back for the long term.
What are the advantages and disadvantages to the two different developmental approaches?
The primary advantage of the AI route is that it is immeasurably cheaper and easier to do research. AI is developed on paper and in code. Most useful IA research, on the other hand, is illegal. Serious IA would require deep neurosurgery and experimental brain implants. These brain implants may malfunction, causing seizures, insanity, or death. Enhancing human intelligence in a qualitative way is not a matter of popping a few pills — you really need to develop brain implants to get any significant returns.
Most research in that area is heavily regulated and expensive. All animal testing is expensive. Theodore Berger has been working on a hippocampal implant for a number of years — and in 2004 it passed a live tissue test, but there has been very little news since then. Every few years he pops up in the media and says it's just around the corner, but I'm skeptical. Meanwhile, there is a lot of intriguing progress in Artificial Intelligence.
Does IA have the potential to be safer than AI as far as predictability and controllability are concerned? Is it important that we develop IA before super-powerful AGI?
Intelligence Augmentation is much more unpredictable and uncontrollable than AGI has the potential to be. It's actually quite dangerous, in the long term. I recently wrote an article that speculates on global political transformation caused by a large amount of power concentrated in the hands of a small group due to "miracle technologies" like IA or molecular manufacturing. I also coined the term "Maximillian," meaning "the best," to refer to a powerful leader making use of intelligence enhancement technology to put himself in an unassailable position.
Image: The cognitively enhanced Reginald Barclay from the ST:TNG episode, "The Nth Degree." 
The problem with IA is that you are dealing with human beings, and human beings are flawed. People with enhanced intelligence could still have a merely human-level morality, leveraging their vast intellects for hedonistic or even genocidal purposes.
AGI, on the other hand, can be built from the ground up to simply follow a set of intrinsic motivations that are benevolent, stable, and self-reinforcing.
People say, "won't it reject those motivations?" It won't, because those motivations will make up its entire core of values — if it's programmed properly. There will be no "ghost in the machine" to emerge and overthrow its programmed motives. Philosopher Nick Bostrom does an excellent analysis of this in his paper "The Superintelligent Will". The key point is that selfish motivations will not magically emerge if an AI has a goal system that is fundamentally selfless, if the very essence of its being is devoted to preserving that selflessness. Evolution produced self-interested organisms because of evolutionary design constraints, but that doesn't mean we can't code selfless agents de novo.
What roadblocks, be they technological, medical, or ethical, do you see hindering development?
The biggest roadblock is developing the appropriate manufacturing technology. Right now, we aren't even close.
Another roadblock is figuring out what exactly each neuron does, and identifying the exact positions of these neurons in individual people. Again, we're not even close.
Thirdly, we need some way to quickly test extremely fine-grained theories of brain function — what Ed Boyden calls "high throughput circuit screening" of neural circuits. The best way to do this would be to somehow create a human being without consciousness and experiment on them to our heart's content, but I have a feeling that idea might not go over so well with ethics committees.
Absent that, we'd need an extremely high-resolution simulation of the human brain. Contrary to hype surrounding "brain simulation" projects today, such a high-resolution simulation is not likely to be developed until the 2050-2080 timeframe. An Oxford analysis picks a median date of around 2080. That sounds a bit conservative to me, but it's in the right ballpark.
This article originally appeared at io9.
Top image: imredesiuk/shutterstock.

How Much Longer Until Humanity Becomes A Hive Mind?


Earlier this year, researchers created an electronic link between the brains of two rats separated by thousands of miles. This was just another reminder that technology will one day make us telepaths. But how far will this transformation go? And how long will it take before humans evolve into a fully-fledged hive mind? We spoke to the experts to find out.
I spoke to three different experts, all of whom have given this subject considerable thought: Kevin Warwick, a British scientist and professor of cybernetics at the University of Reading; Ramez Naam, an American futurist and author of Nexus (a sci-fi novel addressing this topic); and Anders Sandberg, a Swedish neuroscientist from the Future of Humanity Institute at the University of Oxford.
They all told me that the possibility of a telepathic noosphere is very real — and it's closer to reality than we might think. And not surprisingly, this would change the very fabric of the human condition. 
Connecting brains
My first question to the group had to do with the technological requirements. How is it, exactly, that we’re going to connect our minds over the Internet, or some future manifestation of it?
“I really think we have sufficient hardware available now — tools like BrainGate,” says Warwick. “But we have a lot to learn with regard to how much the brain can adapt, just how many implants would be required, and where they would need to be positioned.”
Naam agrees that we’re largely on our way. He says we already have the basics of sending some sorts of information in and out of the brain. In humans, we’ve done it with video, audio, and motor control. In principle, nothing prevents us from sending that data back and forth between people.
“Practically speaking, though, there are some big things we have to do,” he tells me. “First, we have to increase the bandwidth. The most sophisticated systems we have right now use about 100 electrodes, while the brain has more than 100 billion neurons. If you want to get good fidelity on the stuff you’re beaming back and forth between people, you’re going to want to get on the order of millions of electrodes.”
Naam says we can build the electronics for that easily, but building it in such a way that the brain accepts it is a major challenge.
The second hurdle, he says, is going beyond sensory and motor control.
“If you want to beam speech between people, you can probably tap into that with some extensions of what we’ve already been doing, though it will certainly involve researchers specifically working on decoding that kind of data,” he says. “But if you want to go beyond sending speech and get into full blown sharing of experiences, emotions, memories, or even skills (a la The Matrix), then you’re wandering into unknown territory.”
Indeed, Sandberg says that picking up and translating brain signals will be a tricky matter.
“EEG sensors have lousy resolution — we get an average of millions of neurons, plus electrical noise from muscles and the surroundings,” he says. “Subvocalisation and detecting muscle twitches is easier to do, although they will still be fairly noisy. Internal brain electrodes exist and can get a lot of data from a small region, but this of course requires brain surgery. I am having great hopes for optogenetics and nanofibers for making kinder, gentler implants that are less risky to insert and easier on their tissue surroundings.”
The real problem, he says, is translating signals in a sensible way. “Your brain’s representation of the concept "mountain" is different from mine, the result not just of our different experiences, but also of our different neurons. So, if I wanted to activate the mountain concept, I would need to activate a dispersed, perhaps very complex network across your brain,” he tells me. “That would require some translation that figured out that I wanted to suggest a mountain, and found which pattern is your mountain.”
Sandberg says we normally "cheat" by learning a convenient code called language, where all the mapping between the code and our neural activations is learned as we grow. We can, of course, learn new codes as adults, and this is rarely a problem — adults already master things like Morse code, SMS abbreviations, or subtle signs of gesture and style. Sandberg points to the recent experiments by Nicolelis connecting brains directly, research which shows that it might be possible to get rodents to learn neural codes. But he says this learning is cumbersome, and we should be able to come up with something simpler.
One way is to boost learning. Some research shows that amphetamine and presumably other learning stimulants can speed up language learning. Recent work on the Nogo Receptor suggests that brain plasticity can be turned on and off. “So maybe we can use this to learn quickly,” says Sandberg.
Another way is to have software do the translation. It is not hard to imagine machine learning figuring out which neural codes or mumbled keywords correspond to which signal — but setting up the training so that users find it acceptably fast is another matter.
“So my guess is that if pairs of people really wanted to ‘get to know each other’ and devoted a lot of time and effort, they could likely learn signals and build translation protocols that would allow a lot of ‘telepathic’ communication — but it would be very specific to them, like the ‘internal language’ some couples have,” says Sandberg. “For the weaker social links, where we do not want to spend months learning how to speak to each other, we would rely on automatically translated signals. A lot of it would be standard things like voice and text, but one could imagine adding supporting ‘subtitles’ showing graphics or activating some neural assemblies.”
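Those "automatically translated signals" amount, at bottom, to a supervised decoding problem: record activity while the sender attends to known concepts, then train a classifier to map new activity patterns onto those labels. Here is a minimal sketch of that idea; the electrode count, concept list, and synthetic "recordings" are all invented for illustration, and a real system would work from actual multi-electrode data.

```python
# Toy "neural code translator": map recorded activity patterns to concept labels.
# Everything here is synthetic; a real system would use actual multi-electrode recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

N_ELECTRODES = 64                        # hypothetical number of recording channels
CONCEPTS = ["mountain", "river", "house"]

# Pretend each concept evokes a characteristic (but noisy) activity pattern in the sender.
prototypes = {c: rng.normal(size=N_ELECTRODES) for c in CONCEPTS}

def record_trial(concept):
    """Simulate one noisy recording of the sender attending to `concept`."""
    return prototypes[concept] + rng.normal(scale=0.5, size=N_ELECTRODES)

# Training phase: the sender deliberately thinks of labelled concepts, many times each.
X = np.array([record_trial(c) for c in CONCEPTS for _ in range(50)])
y = np.array([c for c in CONCEPTS for _ in range(50)])
decoder = LogisticRegression(max_iter=1000).fit(X, y)

# Later, an unlabelled pattern arrives and is "translated" into a concept, which the
# receiver's side could render as text, graphics, or a stimulation pattern of its own.
print(decoder.predict([record_trial("mountain")])[0])   # -> 'mountain', most of the time
```

The hard part Sandberg points to is not the classifier but the training protocol: gathering enough labelled examples, per person, without it feeling like months of drill.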

Bridging the gap

In terms of the communications backbone, Sandberg believes it’s largely in place, but it will likely have to be extended much further.
“The theoretical bandwidth limitations of even a wireless Internet are far, far beyond the bandwidth limitations of our brains — tens of terabits per second,” he told me, “and there are orbital angular momentum methods that might get far more.”
Take the corpus callosum, for example. It has around 250 million axons, and even at maximal firing rates that amounts to only about 25 gigabits per second, yet that is enough to keep the hemispheres connected such that we feel we are a single mind.
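A back-of-the-envelope version of that comparison, assuming each axon fires at most on the order of 100 times per second and carries roughly one bit per spike (both per-axon figures are my assumptions, not Sandberg's):

$$
2.5\times10^{8}\ \text{axons} \times 100\ \tfrac{\text{spikes}}{\text{s}} \times 1\ \tfrac{\text{bit}}{\text{spike}} \approx 2.5\times10^{10}\ \tfrac{\text{bits}}{\text{s}} = 25\ \text{Gbit/s},
$$

which is orders of magnitude below the tens of terabits per second he cites for wireless links.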
As for the interface, Warwick says we should stick to implanted multi-electrode arrays. These may someday become wireless, but they’ll have to remain wired until we learn more about the process. Like Sandberg, he adds that we’ll also need to develop adaptive software interfacing.
Naam envisions something laced throughout the brain, coupled with some device that could be worn on the person’s body.
“For the first part, you can imagine a mesh of nano-scale sensors either inserted through a tiny hole in the skull, or somehow through the brain’s blood vessels. In Nexus I imagined a variant on this — tiny nano-particles that are small enough that they can be swallowed and will then cross the blood-brain barrier and find their way to neurons in the brain.”
Realistically, Naam says, whatever we insert in the brain is going to have very low energy consumption. The implant, or mesh, or nano-particles could communicate wirelessly, but to boost their signal — and to provide them power — scientists will have to pair them with something the person wears: a cap, a pair of glasses, a headband, anything worn near the brain that can pick up those weak signals, boost them, and channel signals from the outside world back into the brain.

How soon before the hive mind?

Warwick believes that the technologies required to build an early version of the telepathic noosphere are largely in place. All that’s required, he says, is “money on the table” and the proper ethical approval.
Sandberg concurs, saying that we’re already doing it with cellphones. He points to the work of Charles Stross, who suggests that the next generation will never have to be alone, get lost, or forget anything.
“As soon as people have persistent wearable systems that can pick up their speech, I think we can do a crude version,” says Sandberg. “Having a system that’s on all the time will allow us to get a lot of data — and it better be unobtrusive. I would not be surprised to see experiments with Google Glass before the end of the year, but we’ll probably end up saying it’s just a fancy way of using cellphones.”
At the same time, Sandberg suspects that “real” neural interfacing will take a while, since it needs to be safe, convenient, and have a killer app worth doing. It will also have to compete with existing communications systems and their apps.
Similarly, Naam says we could build a telepathic network in a few years, but with “very, very low fidelity.” That low fidelity, he says, would be considerably worse than the quality we get by using phones — or even text or IM. “I doubt anyone who’s currently healthy would want to use it.”
But for a really stable, high bandwidth system in and out of the brain, that could take upwards of 15 to 20 years, which Naam concedes is optimistic.
“In any case, it’s not a huge priority,” he says. “And it’s not one where we’re willing to cut corners today. It’s firmly in the medical sphere, and the first rule there is ‘do no harm’. That means that science is done extremely cautiously, with the priority overwhelmingly — and appropriately — being not to harm the human subject.”

Nearly supernatural

I asked Sandberg how the telepathic noosphere will disrupt the various ways humans engage in work and social relations.
“Any enhancement of communication ability is a big deal,” he responded. “We humans are dominant because we are so good at communication and coordination, and any improvement would likely boost that. Just consider flash mobs or how online ARG communities do things that seem nearly supernatural.”
Cell phones, he says, made our schedules flexible in time and space, allowing us to coordinate where to meet on the fly. He says we’re also adding various non-human services like apps and Siri-like agents. “Our communications systems are allowing us to interact not just with each other but with various artificial agents,” he says. Messages can be stored, translated and integrated with other messages.
“If we become telepathic, it means we will have ways of doing the same with concepts, ideas and sensory signals,” says Sandberg. “It is hard to predict just what this will be used for since there are so few limitations. But just consider the possibility of getting instruction and skills via augmented reality and well designed sensory/motor interfaces. A team might help a member perform actions while ‘looking over her shoulder’, as if she knew all they knew. And if the system is general enough, it means that you could in principle get help from any skilled person anywhere in the world.”
In response to the same question, Naam noted that communication boosts can accelerate technical innovation, but more importantly, they can also accelerate the spread of any kind of idea. “And that can be hugely disruptive,” he says.
But in terms of the possibilities, Naam says the sky’s the limit.
“With all of those components, you can imagine people doing all sorts of things with such an interface. You could play games together. You could enter virtual worlds together,” he says. “Designers or architects or artists could imagine designs and share them mentally with others. You could work together on any type of project where you can see or hear what you’re doing. And of course, sex has driven a lot of information technologies forward — with sight, sound, touch, and motor control, you could imagine new forms of virtual sex or virtual pornography.”
Warwick imagines communication in the broadest sense, including the technically-enabled telepathic transmission of feelings, thoughts, ideas, and emotions. “I also think this communication will be far richer when compared to the present pathetic way in which humans communicate.” He suspects that visual information may eventually be possible, but that will take some time to develop. He even imagines the sharing of memories. That may be possible, he says, “but maybe not in my lifetime.”
Put all this together, says Warwick, and “the body becomes redundant.” Moreover, when connected in this way “we will be able to understand each other much more.”

A double-edged sword

We also talked about the potential risks.
“There’s the risk of bugs in hardware or software,” says Naam. “There’s the risk of malware or viruses that infect this. There’s the risk of hackers being able to break into the implants in your head. We’ve already seen hackers demonstrate that they can remotely take over pacemakers and insulin pumps. The same risks exist here.”
But the big societal risk, says Naam, stems entirely from the question of who controls this technology.
“That’s the central question I ask in Nexus,” he says. “If we all have brain implants, you can imagine it driving a very bottom-up world — another Renaissance, a world where people are free and creating and sharing more new ideas all the time. Or you can imagine it driving a world like that of 1984, where central authorities are the ones in control, and they’re the ones using these direct brain technologies to monitor people, to keep people in line, or even to manipulate people into being who they’re supposed to be. That’s what keeps me up at night.”
Warwick, on the other hand, told me that the “biggest risk is that some idiot — probably a politician or business person — may stop it from going ahead.” He suspects it will lead to a digital divide between those who have and those who do not, but that it’s a natural progression very much in line with evolution to date.
In response to the question of privacy, Sandberg quipped, “Privacy? What privacy?”
Our lives, he says, will reside in the cloud, and on servers owned by various companies that also sell results from them to other organizations.
“Even if you do not use telepathy-like systems, your behaviour and knowledge can likely be inferred from the rich data everybody else provides,” he says. “And the potential for manipulation, surveillance and propaganda is endless.”

Our cloud exoselves

Without a doubt, the telepathic noosphere will alter the human condition in ways we cannot even begin to imagine. The noosphere will be an extension of our minds. And as David Chalmers and Andy Clark have noted, we should still regard external mental processes as being genuine even though they’re technically happening outside our skulls. Consequently, as Sandberg told me, our devices and “cloud exoselves” will truly be extensions of our minds.
“Potentially very enhancing extensions,” he says, “although unlikely to have much volition of their own.”
Sandberg argues that we shouldn’t want our exoselves to be too independent, since they’re likely to make mistakes in our name. “We will always want to have veto power, a bit like how the conscious level of our minds has veto on motor actions being planned,” he says.
Veto power over our cloud exoselves? The future will be a very strange place, indeed.
This article originally appeared at io9.
Top image: agsandrew/Shutterstock, Nicolelis lab.

May 11, 2013

How Skynet Might Emerge From Simple Physics


A provocative new paper proposes that complex intelligent behavior may emerge from a fundamentally simple physical process. The theory offers novel prescriptions for how to build an AI — but it also explains how a world-dominating superintelligence might come about. We spoke to the lead author to learn more.
In the paper, which now appears in Physical Review Letters, Harvard physicist and computer scientist Dr. Alex Wissner-Gross posits a Maximum Causal Entropy Production Principle — a conjecture that intelligent behavior in general spontaneously emerges from an agent’s effort to ensure its freedom of action in the future. According to this theory, intelligent systems move towards those configurations which maximize their ability to respond and adapt to future changes.
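In the paper, that pressure toward open futures is formalized as a "causal entropic force": roughly, the force on a system at macrostate $X_0$ is the gradient of the entropy of its possible future paths over a time horizon $\tau$,

$$
\mathbf{F}(X_0, \tau) = T_c \,\nabla_X S_c(X, \tau)\Big|_{X_0},
\qquad
S_c(X, \tau) = -k_B \int \Pr\!\big(\mathbf{x}(t)\,\big|\,\mathbf{x}(0)\big)\,\ln \Pr\!\big(\mathbf{x}(t)\,\big|\,\mathbf{x}(0)\big)\, \mathcal{D}\mathbf{x}(t),
$$

where $S_c$ is the Shannon entropy over all paths the system could traverse during the next $\tau$ seconds and $T_c$ is a "causal path temperature" that sets the strength of the drive. The force points toward states from which the greatest diversity of futures remains reachable.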

Causal Entropic Forces

It’s an idea that was partially inspired by Raphael Bousso’s Causal Entropic Principle, which suggests that universes which produce a lot of entropy over the course of their lifetimes (i.e., a gradual decline into disorder) tend to have properties, such as the cosmological constant, that are more compatible with the existence of intelligent life as we know it.
“I found Bousso’s results, among others, very suggestive since they hinted that perhaps there was some deeper, more fundamental, relationship between entropy production and intelligence,” Wissner-Gross told me.
The reason that entropy production over the lifetime of the universe seems to correlate with intelligence, he says, may be because intelligence actually emerges directly from a form of entropy production over shorter time spans.
“So the big picture — and the connection with the Anthropic Principle — is that the universe may actually be hinting to us how to build intelligences, by telling us through the tunings of various cosmological parameters what the physical phenomenology of intelligence is,” he says.
To test this theory, Wissner-Gross, along with his MIT colleague Cameron Freer, created a software engine called Entropica. The software allowed them to simulate a variety of model universes and then apply an artificial pressure to those universes to maximize causal entropy production.
“We call this pressure a Causal Entropic Force — a drive for the system to make as many futures accessible as possible,” he told us. “And what we found was, based on this simple physical process, that we were actually able to successfully reproduce standard intelligence tests and other cognitive behaviors, all without assigning any explicit goals.”
For example, Entropica was able to pass multiple animal intelligence tests, play human games, and even earn money trading stocks. Entropica also spontaneously figured out how to display other complex behaviors like upright balancing, tool use, and social cooperation.
In an earlier version of the upright balancing experiment, which involved an agent on a pogo stick, Entropica was powerful enough to figure out that, by pushing up and down repeatedly in a specific manner, it could “break” the simulation. Wissner-Gross likened it to an advanced AI trying to break out of its confinement.
“In some mathematical sense, that could be seen as an early example of an AI trying to break out of a box in order to try to maximize its future freedom of action,” he told us.
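A toy illustration of that drive, stripped of all the physics: a gridworld walker that, at each step, moves to the cell from which the most distinct positions remain reachable over a short horizon. Counting reachable cells is only a crude stand-in for causal path entropy, and the grid, wall layout, and horizon below are invented for the example (this is not the Entropica engine), but the qualitative behavior is the point: the agent drifts out of corners and away from walls simply because open space keeps more futures available.

```python
# Toy "causal entropy" agent: at each step, move so as to keep the most futures open.
from collections import deque

W, H, HORIZON = 9, 9, 4
WALLS = {(4, y) for y in range(1, 8)}   # hypothetical wall splitting the grid

def neighbors(pos):
    """Cells reachable in one step (4-connected, staying on the grid, avoiding walls)."""
    x, y = pos
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nxt = (x + dx, y + dy)
        if 0 <= nxt[0] < W and 0 <= nxt[1] < H and nxt not in WALLS:
            yield nxt

def reachable_count(start, horizon):
    """Number of distinct cells reachable from `start` within `horizon` steps (BFS)."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        pos, dist = frontier.popleft()
        if dist == horizon:
            continue
        for nxt in neighbors(pos):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return len(seen)

pos = (0, 0)   # start boxed into a corner
for _ in range(10):
    # Greedy proxy for the causal entropic force: pick the move that keeps the most futures open.
    pos = max(neighbors(pos), key=lambda p: reachable_count(p, HORIZON))
    print(pos)   # the agent wanders toward open space, away from the corner
```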

The Cognitive Niche

Needless to say, Wissner-Gross’s idea is also connected to biological evolution and the emergence of intelligence. He points to the cognitive niche theory, which suggests that there is an ecological niche in any given dynamic biosphere for an organism that’s able to think quickly and adapt. But this adaptation would have to happen on much faster time scales than normal evolution.
“There’s a certain gap in adaptation space that evolution doesn’t fill, where complex — but computable — environmental changes occur on a time scale too fast for natural evolution to adapt to,” he says. “This so-called cognitive niche is a hole that only intelligent organisms can fill.”
Darwinian evolution in such dynamic environments, he argues, when given enough time, should eventually produce organisms that are capable, through internal strategic modeling of their environment, of adapting on much faster time scales than their own generation times.
Consequently, Wissner-Gross’s results can be seen as providing an explicit demonstration that the cognitive niche theory can inspire intelligent behavior based on pure thermodynamics.

A New Approach to Generating Artificial Superintelligence

As noted, Wissner-Gross’s work has serious implications for AI. And in fact, he says it turns conventional notions of a world-dominating artificial intelligence on its head.

“It has long been implicitly speculated that at some point in the future we will develop an ultrapowerful computer and that it will pass some critical threshold of intelligence, and then after passing that threshold it will suddenly turn megalomaniacal and try to take over the world,” he said.
No doubt, this general assumption has been the premise for a lot of science fiction, ranging from Colossus: The Forbin Project and 2001: A Space Odyssey, through to the Terminator films and The Matrix.
“The conventional storyline,” he says, “has been that we would first build a really intelligent machine, and then it would spontaneously decide to take over the world.”
But one of the key implications of Wissner-Gross’s paper is that this long-held assumption may be completely backwards — that the process of trying to take over the world may actually be a more fundamental precursor to intelligence, and not vice versa.
“We may have gotten the order of dependence all wrong,” he argues. “Intelligence and superintelligence may actually emerge from the effort of trying to take control of the world — and specifically, all possible futures — rather than taking control of the world being a behavior that spontaneously emerges from having superhuman machine intelligence.”
Instead, says Wissner-Gross, from the rather simple thermodynamic process of trying to seize control of as many potential future histories as possible, intelligent behavior may fall out immediately.

Seizing Future Histories

Indeed, the idea that intelligent behavior emerges as an effort to keep future options open is an intriguing one. I asked Wissner-Gross to elaborate on this point.
“Think of games like chess or Go,” he said, “in which good players try to preserve as much freedom of action as possible.”
The game of Go in particular, he says, is an excellent case study.
“When the best computer programs play Go, they rely on a principle in which the best move is the one which preserves the greatest fraction of possible wins,” he says. “When computers are equipped with this simple strategy — along with some pruning for efficiency — they begin to approach the level of Go grandmasters.” And they do this by sampling possible future paths.
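That "greatest fraction of possible wins" rule, estimated by sampling future paths, is essentially flat Monte Carlo move evaluation, the core on top of which modern Monte Carlo tree search programs add their pruning and tree policies. A stripped-down sketch, using tic-tac-toe instead of Go so it stays self-contained and runnable (the game, playout count, and board encoding are my choices for illustration):

```python
# Flat Monte Carlo move selection: pick the move whose sampled futures win most often.
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell is None]

def random_playout(board, to_move):
    """Play uniformly random legal moves to the end; return the winner (None for a draw)."""
    board = board[:]
    while winner(board) is None and legal_moves(board):
        board[random.choice(legal_moves(board))] = to_move
        to_move = "O" if to_move == "X" else "X"
    return winner(board)

def best_move(board, player, playouts=200):
    """For each candidate move, sample random futures; keep the move with the most wins."""
    opponent = "O" if player == "X" else "X"
    def win_fraction(move):
        nxt = board[:]
        nxt[move] = player
        wins = sum(random_playout(nxt, opponent) == player for _ in range(playouts))
        return wins / playouts
    return max(legal_moves(board), key=win_fraction)

print(best_move([None] * 9, "X"))   # usually 4: the centre keeps the most winning futures open
```

Real Go engines replace the flat loop with a search tree and smarter playout policies, but the selection criterion is the same: the move whose sampled futures contain the largest share of wins.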
A fan of Frank Herbert’s Dune series, Wissner-Gross drew another analogy for me, but this time to the character of Paul Atreides who, after ingesting the spice melange and becoming the Kwisatz Haderach, could see all possible futures and hence choose from them, enabling him to become a galactic god.
Moreover, the series’ theme of humanity learning the importance of not allowing itself to become beholden to a single controlling interest by keeping its futures as open as possible resonates deeply with Wissner-Gross’ new theory.

Recursive Self-Improvement

Returning to the issue of superintelligent AI, I asked Wissner-Gross about the frightening prospect of recursive self-improvement — the notion that a self-scripting AI could iteratively and unilaterally decide to continually improve upon itself. He believes the prospect is possible, and that it would be consistent with his theory.
“The recursive self-improving of an AI can be seen as implicitly inducing a flow over the entire space of possible AI programs,” he says. “In that context, if you look at that flow over AI program space, it is conceivable that causal entropy maximization might represent a fixed point and that a recursively self-improving AI will tend to self-modify so as to do a better and better job of maximizing its future possibilities.”
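One way to read that claim formally (the notation here is mine, not the paper's): let $M$ be the self-modification map that takes an AI program $P$ to its next revision. Repeated self-improvement is then the orbit $P,\ M(P),\ M(M(P)),\ \dots$, a discrete flow over program space, and a fixed point of that flow is a program $P^{*}$ satisfying

$$
M(P^{*}) = P^{*}.
$$

The conjecture is that programs already optimal at maximizing causal entropy sit at, or attract the flow toward, such fixed points: once a program keeps its future options as open as it possibly can, further self-modification has nothing left to improve on that score.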

Is Causal Entropy Maximization Friendly?

So how friendly would an artificial superintelligence that maximizes causal entropy be?
“Good question,” he responded. “We don’t yet have a universal answer to that.” But he suggests that the financial industry may provide some clues.
“Quantitative finance is an interesting model for the friendliness question because, in a volume sense, it has already been turned over to (specialized) superhuman intelligences,” he told me. Wissner-Gross previously discussed issues surrounding financial AI in a talk he gave at the 2011 Singularity Summit.
Now that these advanced systems exist, they’ve been observed to compete with each other for scarce resources, and — especially at high frequencies — they appear to have become somewhat apathetic to human economies. They’ve decoupled themselves from the human economy because events that happen on slower human time scales — what might be called market “fundamentals” — have little to no relevance to their own success.
But Wissner-Gross cautioned that zero-sum competition between artificial agents is not inevitable, and that it depends on the details of the system.
“In the problem solving example, I show that cooperation can emerge as a means for the systems to maximize their causal entropy, so it doesn’t always have to be competition,” he says. “If more future possibilities are gained through cooperation rather than competition, then cooperation by itself should spontaneously emerge, speaking to the potential for friendliness.”

Attempting to Contain AIs

We also discussed the so-called boxing problem — the fear that we won’t be able to contain an AI once it gets smart enough. Wissner-Gross argues that the problem of boxing may actually turn out to be much more fundamental to AI than it has been previously assumed.
“Our causal entropy maximization theory predicts that AIs may be fundamentally antithetical to being boxed,” he says. “If intelligence is a phenomenon that spontaneously emerges through causal entropy maximization, then it might mean that you could effectively reframe the entire definition of Artificial General Intelligence to be a physical effect resulting from a process that tries to avoid being boxed.”
Which is quite frightening when you think about it.
Read the entire paper: A. D. Wissner-Gross and C. E. Freer, “Causal Entropic Forces,” Physical Review Letters 110, 168702 (2013).