February 21, 2014
February 17, 2014
We're still decades — if not centuries — away from being able to transfer a mind to a supercomputer. It's a fantastic future prospect that makes some people incredibly squeamish. But there are considerable benefits to living a digital life. Here's why you should seriously consider uploading.
As I've pointed out before, uploading is not a given; there are many conceptual, technological, ethical, and security issues to overcome. But for the purposes of this Explainer, we're going to assume that uploads, or digital mind transfers, will eventually be possible — whether it be from the scanning and mapping of a brain, serial brain sectioning, brain imaging, or some unknown process.
Indeed, it's a prospect that's worth talking about. Many credible scientists, philosophers, and futurists believe there's nothing inherently intractable about the process. The human brain — apparently a substrate-independent Turing machine — adheres to the laws of physics in a material universe. Eventually, we'll be able to create a model of it using non-biological stuff — and even convert, or transfer, existing analog brains to digital ones.
So, assuming you'll live long enough to see it — and muster up the courage to make the paradigmatic leap from meatspace to cyberspace — here's what you have to look forward to:
An End to Basic Biological Functions
Once you're living as a stream of 1s and 0s, you'll never have to worry about body odor, going to the bathroom, or having to brush your teeth. You won't need to sleep or have sex — unless, of course, you program yourself such that you'll both want and need to do these things (call it a purist aesthetic choice).
At the same time, you won't have to worry about rising cholesterol levels, age-related disorders, and broken bones. But you will have to worry about viruses (though they'll be of a radically different sort), hackers, and unhindered access to processing power.
Radically Extended Life
The end of an organic, biological human life will offer the potential for an indefinitely long one. For many, virtual immortality will be the primary appeal of uploading. So long as the supercomputer in which you reside is secure and safe (which might mean, e.g., planning an exodus from the solar system when the Sun enters its death throes), you should be able to live until the universe is torn apart in the Big Rip — an event that shouldn't happen for another 22 billion years.
Creating Backup Copies
I spoke to futurist John Smart about this one. He's someone who's actually encouraging the development of technologies required for brain preservation and uplift. To that end, he's the Vice President of the Brain Preservation Foundation, a not-for-profit research group working to evaluate — and award — a number of scanning and preservation strategies.
Smart says it's a good idea to create an upload as a backup for your bioself while you're still alive.
"We are really underthinking the value of this," he told io9. "With molecular-scale MRI, which may be possible for large tissue samples in a few decades, and works today for a few cubic nanometers, people may do nondestructive self-scanning (uploading) of their brains while they are alive, mid- to late-21st century."
Smart says that if he had such a backup on file, he would be far more zen about his own biological death.
"I could see whole new philosophical movements opening up around this," he says. "Would you run your upload as an advisor/twin while you are alive? Or just keep him as your backup, to boot up whenever you choose to leave biolife, for whatever personal reasons? I think people will want both choices, and both options will be regularly chosen."
Making Virtually Unlimited Copies of Yourself
Related to the previous idea, we could also create an entire armada of ourselves for any number of purposes.
"The ability to make arbitrary numbers of copies of yourself, to work on tough problems, or try out different personal life choice points, and to reintegrate them later, or not, as you prefer, will be a great new freedom of uploads," says Smart. "This happens already when we argue with ourselves. We are running multiple mindset copies — and we must be careful with that, as it can sometimes lead to dissociative personality disorder when combined with big traumas — but in general, multiple mindsets for people, and multiple instances of self, will probably be a great new capability and freedom."
Smart points to the fictional example of Jamie Madrox, aka Multiple Man, the comic book superhero who can create, and later reabsorb, "dupes" of himself, with all their memories and experiences.
Dramatically Increased Clock Speed
Aside from indefinite lifespans, this may be one of the sweetest aspects of uploading. Living in a supercomputer would be like Neo operating in Bullet Time, or like those small animals that perceive the world in slow motion relative to humans. As uploads, we could do more thinking, get more done, and experience more than wetware organisms functioning in "real time." And best of all, an accelerated clock speed would significantly increase the amount of subjective time we can have in the Universe before it comes to a grinding halt.
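To make the Bullet Time intuition concrete, here's a back-of-the-envelope sketch. The speedup factor below is purely an illustrative assumption (often justified by the rough gap between millisecond neural timescales and nanosecond silicon timescales), not a real estimate:

```python
# Toy arithmetic: subjective time experienced by an upload running
# faster than biological "real time." The speedup factor is an
# illustrative assumption, not a prediction.

def subjective_years(real_years: float, speedup: float) -> float:
    """Subjective years experienced during `real_years` of wall-clock time."""
    return real_years * speedup

# A commonly cited thought-experiment figure is a millionfold speedup.
print(subjective_years(1, 1_000_000))  # one real year -> a million subjective years
```

At that (hypothetical) rate, the 22 billion real years left before the Big Rip would feel like a great deal longer from the inside.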
"I think the potential for increased clock speeds is the central reason why uploads are the next natural step for leading edge intelligence on Earth," says Smart. "We seem to be rushing headlong to virtual and physical 'inner space.'"
Radically Reduced Global Footprints
Uploading is also environmentally friendly, something that could help us address our perpetually growing population — especially in consideration of radical life extension at the biological level. In fact, transferring our minds to a digital substrate may actually be a matter of necessity. Sure, we'll need powerful supercomputers to run the billions — if not trillions — of individual digital experiences, but the relatively low power requirements and reduced levels of fossil fuel emissions simply can't compare to the burden we impose on the planet with our corporeal civilization.
It'll also be easier to enhance our intelligence when we're purely digital. Trying to boost the cognitive power of a biological brain is prohibitively difficult and dangerous. A digital mind, on the other hand, would be flexible, robust, and easy to repair. Augmented virtual minds could have higher IQ-type intelligence, enhanced memory, and increased attention spans. We'll need to be very careful about going down this path, however, as it could lead to an out-of-control transcending upload — or even insanity.
Uploads will also enable us to engineer and assume any number of alternative psychological modalities. Human experience is currently dominated by the evolutionary default we call neurotypicality, though outliers exist along the autistic spectrum and other so-called psychological "disorders." Customized cognitive processing frameworks would allow uploaded individuals to selectively alter the specific and unique ways in which they absorb, analyze, and perceive the world, allowing for variation in subjectivity, social engagement, aesthetics, and biases. These frameworks could also be swapped on the fly depending on the context, or simply to try out what it feels like to be another person.
Enhanced Emotion Control
Somewhat related to the last one, uploaded individuals will also be able to monitor, regulate, and choose their subjective well-being and emotional state, including their level of happiness.
Uploads could default to the normal spectrum of human emotion, or choose to operate within a predefined band of emotional variability — including, more speculatively, the introduction of new emotions altogether. Safety mechanisms could be built in to prevent a person from spiraling into a state of debilitating depression — or a state of perpetual bliss, unless that's precisely what the upload is seeking.
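The "predefined band" idea is, at bottom, a clamp on an emotional state variable. Here's a minimal sketch; the one-dimensional valence scale and the particular bounds are invented purely for illustration:

```python
# Minimal sketch of a "predefined band of emotional variability":
# emotional valence is modeled as a single number and clamped to a
# configured safe range. The scale (-1 to +1) and the default bounds
# are invented for illustration.

def regulate(valence: float, floor: float = -0.5, ceiling: float = 0.9) -> float:
    """Clamp emotional valence (-1 = despair, +1 = bliss) to a safe band."""
    return max(floor, min(ceiling, valence))

print(regulate(-0.95))  # a depressive spiral is caught at the floor: -0.5
print(regulate(0.4))    # ordinary states pass through unchanged: 0.4
```

An upload seeking perpetual bliss would simply set `ceiling` to 1.0 — the point being that the band is a choice, not a constraint imposed by biology.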
A Better Hive Mind
Linking biological minds to create a kind of technologically enabled telepathy — call it techlepathy — is probably possible. But as I've pointed out before, it'll be exceptionally difficult and messy. A fundamental problem will be translating signals, or thoughts, in a sensible way such that each person in the link-up has the same mental representation for a given object or concept. This translation problem could be overcome by developing standard brain-to-brain communication protocols, or by developing innate translation software. And of course, because all the minds are in the same computer, establishing communication links will be a breeze.
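A toy sketch of what such a "standard communication protocol" might look like: each mind keeps its own private encoding for concepts, and messages are routed through a shared interlingua of standard concept IDs. Every name and token below is invented for illustration:

```python
# Toy sketch of a brain-to-brain translation protocol: each mind has a
# private encoding for concepts, so messages pass through a shared
# interlingua of standard concept IDs. All names are hypothetical.

ALICE_LEXICON = {"fuzzy-red-thing": "APPLE", "long-yellow-thing": "BANANA"}
BOB_LEXICON = {"crunchy-sweet-orb": "APPLE", "monkey-fruit": "BANANA"}

def encode(private_token: str, lexicon: dict) -> str:
    """Map a mind's private representation to a standard concept ID."""
    return lexicon[private_token]

def decode(concept_id: str, lexicon: dict) -> str:
    """Map a standard concept ID to the receiver's private representation."""
    inverse = {v: k for k, v in lexicon.items()}
    return inverse[concept_id]

# Alice transmits her private token; Bob receives it in his own terms.
message = encode("fuzzy-red-thing", ALICE_LEXICON)  # -> "APPLE"
print(decode(message, BOB_LEXICON))                 # -> "crunchy-sweet-orb"
```

The hard part, of course, is the lexicons themselves — mapping idiosyncratic neural representations onto shared concept IDs is precisely the messy translation problem described above.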
Toying With Alternative Physics
Quite obviously, uploads will be able to live in any number of virtual reality environments. These digital worlds will be like souped-up and fully immersive versions of Second Life or World of Warcraft. But why limit ourselves to the physics of the Known Universe when we can tweak it any number of ways? Uploads could add or take away physical dimensions, lower the effect of gravity, increase the speed of light, and alter the effects of electromagnetism. All bets are off in terms of what's possible and the kind of experiences that could be had. By comparison, life in the analog world will seem painfully limited and constrained.
Downloading to an External Body
Now, just because you've uploaded yourself to a supercomputer doesn't mean you have to stay there. Individuals will always have the option of downloading themselves into a robotic or cyborg body, even if only temporarily. But as portrayed in Greg Egan's sci-fi classic Diaspora, these ventures outside the home supercomputer will come with a major drawback — one that's closely tied to the clock speed issue: every moment a person spends in the real, analog world will be equivalent to months or even years in the virtual world. Consequently, you'll need to be careful about how much time you spend off the grid.
Interstellar Space Travel
As futurist Giulio Prisco has noted, it probably makes most sense to send uploaded astronauts on interstellar missions. He writes:
The very high cost of a crewed space mission comes from the need to ensure the survival and safety of the humans on board and the need to travel at extremely high speeds to ensure it's done within a human lifetime. One way to overcome that is to do without the wetware bodies of the crew, and send only their minds to the stars - their "software" — uploaded to advanced circuitry, augmented by AI subsystems in the starship's processing system...An e-crew — a crew of human uploads implemented in solid-state electronic circuitry — will not require air, water, food, medical care, or radiation shielding, and may be able to withstand extreme acceleration. So the size and weight of the starship will be dramatically reduced.
Tron Legacy concept art by David Levy.
This article originally appeared at io9.
We have yet to discover any signs of an extraterrestrial civilization — a situation that could quite literally change overnight. Should that happen, our sense of ourselves and our place in the cosmos would forever be shaken. It could even change the course of human history. Or would it?
Top image: Josh Kao.
Last week, SETI's Seth Shostak made the claim that we'll detect an alien civilization by 2040. Personally, I don't believe this will happen (for reasons I can elucidate in a future post — but the Fermi Paradox is definitely a factor, as is the problem of receiving coherent radio signals across stellar distances). But it got me wondering: What, if anything, would change in the trajectory of a civilization's development if it had definitive proof that intelligent extraterrestrials (ETIs) were real?
Finding a World Much Like Our Own
As I thought about this, I assumed a scenario with three basic elements.
First, that humanity would make this historic discovery within the next several years or so. Second, that we wouldn't actually make contact with the other civilization (just the receipt, say, of a radio transmission — something like a Lucy Signal that would cue us to their existence). And third, that the ETI in question would be at roughly the same level of technological development as our own (so they're not too much more advanced than we are; that said, if the signal came from an extreme distance, like hundreds or thousands of light-years away, these aliens would probably have advanced appreciably by now. Or they could be gone altogether, the victims of a self-inflicted disaster).
I tossed this question over to my friend and colleague Milan Cirkovic. He's a Senior Research Associate at the Astronomical Observatory of Belgrade and a leading expert on SETI.
"Well, that's a very practical question, isn't it?" he responded. "Because people have been expecting something like this since 1960 when SETI was first launched — they haven't really been expecting to find billion-year old supercivilizations or just some stupid bacteria."
Indeed, the underlying philosophy of SETI over the course of its 50-year history has been that we'll likely detect a civilization roughly equal to our own — for better or worse. And no doubt, in retrospect it started to look "for worse" when the hopes of an early success were dashed. Frank Drake and his colleagues thought they would find signs of ETIs fairly quickly, but that turned out not to be the case (though Drake's echo can still be heard in the unwarranted contact optimism of Seth Shostak).
"Some people argued that a simple signal wouldn't mean much for humanity," added Cirkovic, "but I think Carl Sagan, as usual, had a good response to this."
Specifically, Sagan said that the very understanding that we are not unique in the universe would have enormous implications for all those fields in which anthropocentrism reigns supreme.
"Which means, I guess, half of all the sciences and about 99% of the other, non-scientific discourse," said Cirkovic.
Sagan also believed that the detection of a signal would reignite enthusiasm for space in general, both in terms of research and eventually the colonization of space.
"The latter point was quite prescient, actually, because at the time he said this there wasn't much enthusiasm about it and it was much less visible and obvious than it is today," he added.
No doubt — this would likely generate tremendous excitement and enthusiasm for space exploration. In addition to expanding ourselves into space, there would be added impetus to reach out and meet them.
At the same time, however, some here on Earth might counterargue that we should stay home and hide from potentially dangerous civilizations (ah, but what if everybody did this?). Ironically, some might even argue that we should significantly ramp-up our space and military technologies to meet potential alien threats.
In response to my query about the detection of ETIs affecting the developmental trajectory of civilizations, Cirkovic replied that both of Sagan's points can be generalized to any civilization at their early stages of development.
He believes that overcoming speciesist biases, along with maintaining a constant interest in and interaction with the cosmic environment, should be desirable for any (even remotely) rational actors anywhere. But Cirkovic says there may be exceptions — like species who emerge from radically different environments, say, the atmospheres of Jovian planets. Such species would likely show little interest in the surrounding space, which would be invisible to them practically 99% of the time.
So if Sagan is correct, detecting an alien civilization at this point in our history would likely be a good thing. In addition to fostering science and technological development, it would motivate us to explore and colonize space. And who knows, it could even instigate significant cultural and political changes (including the advent of political parties both in support of and in opposition to all this). It could even lead to new religions, or eliminate them altogether.
Another possibility is that nothing would change. Life on Earth would go on as usual as people work to pay their bills and keep a roof over their heads. There could be a kind of detachment about the whole thing, leading to a certain ambivalence.
At the same time, however, it could lead to hysteria and paranoia. Even worse, in a twisted irony, the detection of a civilization equal to our own (or any life less advanced than us, for that matter) could be used to fuel the Great Filter hypothesis of the Fermi Paradox. According to Oxford's Nick Bostrom, such a detection would be a strong indication that doom awaits us in the (likely) near future — a filter that affects all civilizations at or near our current technological stage. The reason, says Bostrom, is that in the absence of a Great Filter, the galaxy should be teeming with super-advanced ETIs by now. Which it's clearly not.
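Bostrom's worry is, at heart, a Bayesian update: if the filter lies ahead of us, civilizations at our stage should be relatively common (they all survive long enough to reach it); if the filter is behind us, peers should be rare. Here's a toy version of that update, with all probabilities invented purely to illustrate the mechanics:

```python
# Toy Bayesian reading of Bostrom's Great Filter worry: detecting a
# peer civilization raises the probability that the filter lies ahead
# of us. All probabilities below are invented for illustration.

def posterior_filter_ahead(prior: float,
                           p_obs_if_ahead: float,
                           p_obs_if_behind: float) -> float:
    """P(filter is ahead | we detect a peer civilization), by Bayes' rule."""
    num = p_obs_if_ahead * prior
    den = num + p_obs_if_behind * (1 - prior)
    return num / den

# If the filter is ahead, peers are common (0.3 chance of detection);
# if it's behind us, peers are rare (0.05). Starting from a 50/50 prior,
# a single detection pushes the odds of doom-ahead to roughly 86%.
print(posterior_filter_ahead(prior=0.5, p_obs_if_ahead=0.3, p_obs_if_behind=0.05))
```

The specific numbers are arbitrary; the structural point is that the detection counts as evidence *for* a filter in our future, which is why the discovery could be bad news.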
Yikes. Stupid Fermi Paradox — always getting in the way of our future plans.