February 21, 2014

Bioengineered monkeys with human genetic diseases have almost arrived — and that's awful

Looking to create more accurate experimental models for human diseases, biologists have created transgenic monkeys with "customized" mutations. It's considered a breakthrough in the effort to produce more human-like monkeys — but the ethics of all this are dubious at best.
Yup, scientists know that mouse models suck. Though they're used in nearly 60% of all experiments, they're among the most unreliable test subjects when it comes to approximating human biological processes (what the hell is an autistic mouse, anyway?).
Great apes, like chimpanzees and bonobos, obviously make for better test subjects. But given how close these animals are to humans in terms of their cognitive and emotional capacities, they're increasingly being seen as ethically inappropriate models for experiments. Indeed, medical experiments on apes are on the way out. There's currently a great ape research ban in the Netherlands, New Zealand, the United Kingdom, Sweden, Germany, and Austria (where it's also illegal to test on lesser apes, like gibbons). In the US, where there are still over 1,200 chimps used for biomedical research, the NIH has decided to stop the practice.

Monkey in the Middle

Regrettably, all this is making monkeys increasingly vulnerable to medical testing. Given that they're primates, and that their brains and bodies are so closely related to our own, they're the logical substitute. But it's for these same reasons that they shouldn't be used in the first place.
Making matters worse, researchers are now actively trying to humanize monkeys by using gene-editing technologies, specifically the CRISPR/Cas9 system. In the latest "breakthrough," Chinese researchers successfully produced twin cynomolgus monkeys carrying targeted mutations in two separate genes — one that helps regulate metabolism, and one involved in healthy immune function.
For the most part, these monkeys are okay (setting aside the fact that they're lab monkeys who will be experimented upon for the rest of their lives). But it's an important proof-of-concept that will result in more advanced precision gene-editing techniques. Eventually, researchers will be able to create monkeys with more serious conditions — serious human conditions, like autism, schizophrenia, Alzheimer's, and severe immune dysfunction.
"We need some non-human primate models," said stem-cell biologist Hideyuki Okano in a recent Nature News article. The reason, he says, is that human neuropsychiatric disorders are particularly difficult to replicate in the simple nervous systems of mice.
That's right — monkeys with human neuropsychiatric disorders.

Where's the Ethics?

Speaking of that Nature News article — and I'm not trying to pick on them because many science journals tend to gloss over the ethical aspects of this sort of research — their coverage of this news was utterly distasteful, to say the least. Here's how they packaged it:

Awww, so adorable. Let's gush over how cute they are, but then talk about how psychologically deranged we're going to make them.
Thankfully, this breakthrough comes at a time when it's becoming (slightly) more difficult for scientists to experiment on monkeys. Back in 2012, United Airlines announced that it would stop transporting research monkeys — eliminating the last North American air carrier still available to primate researchers. Moreover, scientists have other research options at their disposal.
In closing, and in the words of animal rights advocate Peter Singer, "Animals are an end unto themselves because their suffering matters."
Image: jeep2499/Shutterstock.
This article originally appeared at io9. 

February 17, 2014

Why You Should Upload Yourself to a Supercomputer


We're still decades — if not centuries — away from being able to transfer a mind to a supercomputer. It's a fantastic future prospect that makes some people incredibly squeamish. But there are considerable benefits to living a digital life. Here's why you should seriously consider uploading.

As I've pointed out before, uploading is not a given; there are many conceptual, technological, ethical, and security issues to overcome. But for the purposes of this Explainer, we're going to assume that uploads, or digital mind transfers, will eventually be possible — whether it be from the scanning and mapping of a brain, serial brain sectioning, brain imaging, or some unknown process.

Indeed, it's a prospect that's worth talking about. Many credible scientists, philosophers, and futurists believe there's nothing inherently intractable about the process. The human brain — apparently a substrate-independent Turing machine — adheres to the laws of physics in a material universe. Eventually, we'll be able to create a model of it using non-biological stuff — and even convert, or transfer, existing analog brains to digital ones.

So, assuming you'll live long enough to see it — and muster up the courage to make the paradigmatic leap from meatspace to cyberspace — here's what you have to look forward to:

An End to Basic Biological Functions

Once you're living as a stream of 1s and 0s, you'll never have to worry about body odor, going to the bathroom, or having to brush your teeth. You won't need to sleep or have sex — unless, of course, you program yourself such that you'll both want and need to do these things (call it a purist aesthetic choice).



At the same time, you won't have to worry about rising cholesterol levels, age-related disorders, and broken bones. But you will have to worry about viruses (though they'll be of a radically different sort), hackers, and maintaining unhindered access to processing power.

Radically Extended Life

The end of an organic, biological human life will offer the potential for an indefinitely long one. For many, virtual immortality will be the primary appeal of uploading. So long as the supercomputer in which you reside is secure and safe (e.g. planning an exodus from the solar system when the Sun enters its death throes), you should be able to live until the universe is torn apart in the Big Rip — something that shouldn't happen for another 22 billion years.

Creating Backup Copies

I spoke to futurist John Smart about this one. He's someone who's actually encouraging the development of technologies required for brain preservation and uplift. To that end, he's the Vice President of the Brain Preservation Foundation, a not-for-profit research group working to evaluate — and award — a number of scanning and preservation strategies.



Smart says it's a good idea to create an upload as a backup for your bioself while you're still alive.

"We are really underthinking the value of this," he told io9. "With molecular-scale MRI, which may be possible for large tissue samples in a few decades, and works today for a few cubic nanometers, people may do nondestructive self-scanning (uploading) of their brains while they are alive, mid- to late-21st century."

Smart says that if he had such a backup on file, he would be far more zen about his own biological death.

"I could see whole new philosophical movements opening up around this," he says. "Would you run your upload as an advisor/twin while you are alive? Or just keep him as your backup, to boot up whenever you choose to leave biolife, for whatever personal reasons? I think people will want both choices, and both options will be regularly chosen."

Making Virtually Unlimited Copies of Yourself

Related to the previous idea, we could also create an entire armada of ourselves for any number of purposes.



"The ability to make arbitrary numbers of copies of yourself, to work on tough problems, or try out different personal life choice points, and to reintegrate them later, or not, as you prefer, will be a great new freedom of uploads," says Smart. "This happens already when we argue with ourselves. We are running multiple mindset copies — and we must be careful with that, as it can sometimes lead to dissociative personality disorder when combined with big traumas — but in general, multiple mindsets for people, and multiple instances of self, will probably be a great new capability and freedom."

Smart points to the fictional example of Jamie Madrox, aka Multiple Man, the comic book superhero who can create, and later reabsorb, "dupes" of himself, with all their memories and experiences.

Dramatically Increased Clock Speed

Aside from indefinite lifespans, this may be one of the sweetest aspects of uploading. Living in a supercomputer would be like Neo operating in Bullet Time, or like those small animals that perceive the world in slow motion relative to humans. Running at a higher clock speed, we could do more thinking, get more done, and experience more than wetware organisms functioning in "real time." And best of all, this would significantly increase the amount of relative time we can have in the Universe before it comes to a grinding halt.
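To make the "relative time" point concrete, here's a rough back-of-the-envelope sketch, assuming subjective time scales linearly with clock speed; the speedup factors are purely illustrative, and the 22-billion-year figure is the Big Rip horizon mentioned above:

```python
# Back-of-the-envelope sketch: subjective years experienced by an upload,
# assuming subjective time scales linearly with clock speed.
# The speedup factors below are illustrative assumptions, not predictions.

WALL_CLOCK_YEARS = 22e9  # rough wall-clock time left in the Big Rip scenario above

def subjective_years(wall_clock_years: float, speedup: float) -> float:
    """Subjective time experienced when running `speedup` times faster than a biological brain."""
    return wall_clock_years * speedup

for speedup in (1, 1_000, 1_000_000):
    total = subjective_years(WALL_CLOCK_YEARS, speedup)
    print(f"{speedup:>9,}x speedup -> {total:.1e} subjective years")
```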



"I think the potential for increased clock speeds is the central reason why uploads are the next natural step for leading edge intelligence on Earth," says Smart. "We seem to be rushing headlong to virtual and physical 'inner space.'"

Radically Reduced Global Footprints

Uploading is also environmentally friendly, something that could help us address our perpetually growing population — especially given the prospect of radical life extension at the biological level. In fact, transferring our minds to a digital substrate may actually be a matter of necessity. Sure, we'll need powerful supercomputers to run the billions — if not trillions — of individual digital experiences, but their relatively low power requirements and fossil fuel emissions simply can't compare to the burden we impose on the planet with our corporeal civilization.

Intelligence Augmentation

It'll also be easier to enhance our intelligence when we're purely digital. Trying to boost the cognitive power of a biological brain is prohibitively difficult and dangerous. A digital mind, on the other hand, would be flexible, robust, and easy to repair. Augmented virtual minds could have higher IQ-type intelligence, enhanced memory, and increased attention spans. We'll need to be very careful about going down this path, however, as it could lead to an out-of-control transcending upload — or even insanity.

Designer Psychologies

Uploads will also enable us to engineer and assume any number of alternative psychological modalities. Human experience is currently dominated by the evolutionary default we call neurotypicality, though outliers exist along the autistic spectrum and other so-called psychological "disorders." Customized cognitive processing frameworks would allow uploaded individuals to selectively alter the specific and unique ways in which they absorb, analyze, and perceive the world, allowing for variation in subjectivity, social engagement, aesthetics, and biases. These frameworks could also be swapped on the fly, depending on the context — or just to try out what it feels like to be another person.

Enhanced Emotion Control

Somewhat related to the last one, uploaded individuals will also be able to monitor, regulate, and choose their subjective well-being and emotional states, including their levels of happiness.



Uploads could default to the normal spectrum of human emotion, or choose to operate within a predefined band of emotional variability — including, more speculatively, the introduction of new emotions altogether. Safety mechanisms could be built in to prevent a person from spiraling into a state of debilitating depression — or a state of perpetual bliss, unless that's precisely what the upload is seeking.

A Better Hive Mind

Linking biological minds to create a kind of technologically enabled telepathy, or techlepathy, is probably possible. But as I've pointed out before, it'll be exceptionally difficult and messy. A fundamental problem will be translating signals, or thoughts, in a sensible way such that each person in the link-up has the same mental representation for a given object or concept. This translation problem could be overcome by developing standard brain-to-brain communication protocols, or by developing innate translation software. And of course, because all the minds are in the same computer, establishing communication links will be a breeze.
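As a purely hypothetical illustration of what a "standard protocol" might look like, here's a toy sketch in which two minds share a registry of concept IDs while keeping their own private internal representations; every name and structure here is invented for the sake of the example:

```python
# Toy sketch of a shared "concept ID" protocol (all names here are hypothetical).
# The shared registry is the agreed-upon vocabulary; each mind keeps its own
# private mapping from concept IDs to whatever its internal representation is.

from dataclasses import dataclass, field

@dataclass
class ConceptRegistry:
    """Shared vocabulary: concept ID -> human-readable label."""
    concepts: dict[int, str] = field(default_factory=dict)

    def register(self, concept_id: int, label: str) -> None:
        self.concepts[concept_id] = label

@dataclass
class Mind:
    """Each mind translates shared concept IDs to and from its own internal form."""
    name: str
    internal_repr: dict[int, str] = field(default_factory=dict)

    def send(self, concept_id: int) -> int:
        # Transmitting a thought means sending the shared ID, not the private representation.
        return concept_id

    def receive(self, concept_id: int, registry: ConceptRegistry) -> str:
        # The receiver maps the shared ID onto its own representation,
        # falling back to the registry's generic label if it has none.
        return self.internal_repr.get(concept_id, registry.concepts.get(concept_id, "<unknown concept>"))

registry = ConceptRegistry()
registry.register(42, "apple")

alice = Mind("Alice", internal_repr={42: "mental image of a red apple"})
bob = Mind("Bob", internal_repr={42: "the smell of apple pie"})

message = alice.send(42)
print(bob.receive(message, registry))  # -> "the smell of apple pie"
```

The point is simply that agreeing on shared identifiers sidesteps the need for any two minds to have identical internal representations.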

Toying With Alternative Physics

Quite obviously, uploads will be able to live in any number of virtual reality environments. These digital worlds will be like souped-up and fully immersive versions of Second Life or World of Warcraft. But why limit ourselves to the physics of the Known Universe when we can tweak it any number of ways? Uploads could add or take away physical dimensions, lower the effect of gravity, increase the speed of light, and alter the effects of electromagnetism. All bets are off in terms of what's possible and the kind of experiences that could be had. By comparison, life in the analog world will seem painfully limited and constrained.

Downloading to an External Body

Now, just because you've uploaded yourself to a supercomputer doesn't mean you have to stay there. Individuals will always have the option of downloading themselves into a robotic or cyborg body, even if it's just temporary. But as portrayed in Greg Egan's scifi classic, Diaspora, these ventures outside the home supercomputer will come with a major drawback — one that's closely tied to the clock speed issue: every moment a person spends in the real, analog world will be equivalent to months or even years in the virtual world. Consequently, you'll need to be careful about how much time you spend off the grid.

Interstellar Space Travel

As futurist Giulio Prisco has noted, it probably makes most sense to send uploaded astronauts on interstellar missions. He writes:

The very high cost of a crewed space mission comes from the need to ensure the survival and safety of the humans on board and the need to travel at extremely high speeds to ensure it's done within a human lifetime. One way to overcome that is to do without the wetware bodies of the crew, and send only their minds to the stars — their "software" — uploaded to advanced circuitry, augmented by AI subsystems in the starship's processing system... An e-crew — a crew of human uploads implemented in solid-state electronic circuitry — will not require air, water, food, medical care, or radiation shielding, and may be able to withstand extreme acceleration. So the size and weight of the starship will be dramatically reduced.

Tron Legacy concept art by David Levy.

This article originally appeared at io9.

Can we build an artificial superintelligence that won't kill us?


At some point in our future, an artificial intelligence will emerge that's smarter, faster, and vastly more powerful than us. Once this happens, we'll no longer be in charge. But what will happen to humanity? And how can we prepare for this transition? We spoke to an expert to find out.

Luke Muehlhauser is the Executive Director of the Machine Intelligence Research Institute (MIRI) — a group that's dedicated to figuring out the various ways we might be able to build friendly smarter-than-human intelligence. Recently, Muehlhauser coauthored a paper with the Future of Humanity Institute's Nick Bostrom on the need to develop friendly AI.
io9: How did you come to be aware of the friendliness problem as it relates to artificial superintelligence (ASI)?
Muehlhauser: Sometime in mid-2010 I stumbled across a 1965 paper by I.J. Good, who worked with Alan Turing during World War II to decipher German codes. One paragraph in particular stood out:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind... Thus the first ultraintelligent machine is the last invention that man need ever make.
I didn't read science fiction, and I barely knew what "transhumanism" was, but I immediately realized that Good's conclusion followed directly from things I already believed, for example that intelligence is a product of cognitive algorithms, not magic. I pretty quickly realized that the intelligence explosion would be the most important event in human history, and that the most important thing I could do would be to help ensure that the intelligence explosion has a positive rather than negative impact — that is, that we end up with a "Friendly" superintelligence rather than an unfriendly or indifferent superintelligence.
Initially, I assumed that the most important challenge of the 21st century would have hundreds of millions of dollars in research funding, and that there wouldn't be much value I could contribute on the margin. But in the next few months I learned to my shock and horror that fewer than five people in the entire world had devoted themselves full-time to studying the problem, and they had almost no funding. So in April 2011 I quit my network administration job in Los Angeles and began an internship with MIRI, to learn how I might be able to help. It turned out the answer was "run MIRI," and I was appointed MIRI's CEO in November 2011.
Spike Jonze's latest film, Her, has people buzzing about artificial intelligence. What can you tell us about the portrayal of AI in that movie and how it would compare to artificial superintelligence?
Her is a fantastic film, but its portrayal of AI is set up to tell a good story, not to be accurate. The director, Spike Jonze, didn't consult with computer scientists when preparing the screenplay, and this will be obvious to any computer scientists who watch the film.
Without spoiling too much, I'll just say that the AIs in Her, if they existed in the real world, would entirely transform the global economy. But in Her, the introduction of smarter-than-human, self-improving AIs hardly upsets the status quo at all. As economist Robin Hanson commented on Facebook:
Imagine watching a movie like Titanic where an iceberg cuts a big hole in the side of a ship, except in this movie the hole only affects the characters by forcing them to take different routes to walk around, and gives them more welcomed fresh air. The boat never sinks, and no one ever fears it might. That's how I feel watching the movie Her.
AI theorists like yourself warn that we may eventually lose control of our machines, a potentially sudden and rapid transition driven by two factors, computing overhang and recursive self-improvement. Can you explain each of these?
It's extremely difficult to control the behavior of a goal-directed agent that is vastly smarter than you are. This problem is much harder than a normal (human-human) principal-agent problem.
If we got to tinker with different control methods, and make lots of mistakes, and learn from those mistakes, maybe we could figure out how to control a self-improving AI with 50 years of research. Unfortunately, it looks like we may not have the opportunity to make so many mistakes, because the transition from human control of the planet to machine control might be surprisingly rapid. Two reasons for this are computing overhang and recursive self-improvement.
In our paper, my coauthor (Oxford's Nick Bostrom) and I describe computing overhang this way:
Suppose that computing power continues to double according to Moore's law, but figuring out the algorithms for human-like general intelligence proves to be fiendishly difficult. When the software for general intelligence is finally realized, there could exist a 'computing overhang': tremendous amounts of cheap computing power available to run [AIs]. AIs could be copied across the hardware base, causing the AI population to quickly surpass the human population.
Another reason for a rapid transition from human control to machine control is the one first described by I.J. Good, what we now call recursive self-improvement. An AI with general intelligence would correctly realize that it will be better able to achieve its goals — whatever its goals are — if it does original AI research to improve its own capabilities. That is, self-improvement is a "convergent instrumental value" of almost any "final" values an agent might have, which is part of why self-improvement books and blogs are so popular. Thus, Bostrom and I write:
When we build an AI that is as skilled as we are at the task of designing AI systems, we may thereby initiate a rapid, AI-motivated cascade of self-improvement cycles. Now when the AI improves itself, it improves the intelligence that does the improving, quickly leaving the human level of intelligence far behind.
Some people believe that we'll have nothing to fear from advanced AI out of a conviction that something so astoundingly smart couldn't possibly be stupid or mean enough to destroy us. What do you say to people who believe an ASI will be naturally more moral than we are?
In AI, the system's capability is roughly "orthogonal" to its goals. That is, you can build a really smart system aimed at increasing Shell's stock price, or a really smart system aimed at filtering spam, or a really smart system aimed at maximizing the number of paperclips produced at a factory. As you improve the intelligence of the system, or as it improves its own intelligence, its goals don't particularly change — rather, it simply gets better at achieving whatever its goals already are.
There are some caveats and subtle exceptions to this general rule, and some of them are discussed in Bostrom (2012). But the main point is that we shouldn't stake the fate of the planet on a risky bet that all mind designs we might create eventually converge on the same moral values, as their capabilities increase. Instead, we should fund lots of really smart people to think hard about the general challenge of superintelligence control, and see what kinds of safety guarantees we can get with different kinds of designs.
Why can't we just isolate potentially dangerous AIs and keep them away from the Internet?
Such "AI boxing" methods will be important during the development phase of Friendly AI, but it's not a full solution to the problem for two reasons.
First, even if the leading AI project is smart enough to carefully box their AI, the next five AI projects won't necessarily do the same. There will be strong incentives to let one's AI out of the box, if you think it might (e.g.) play the stock market for you and make you billions of dollars. Whatever you built the AI to do, it'll be better able to do it for you if you let it out of the box. Besides, if you don't let it out of the box, the next team might, and their design might be even more dangerous.
Second, AI boxing pits human intelligence against superhuman intelligence, and we can't expect the former to prevail indefinitely. Humans can be manipulated, boxes can be escaped via surprising methods, etc. There's a nice chapter on this subject in Bostrom's forthcoming book from Oxford University Press, titled Superintelligence: Paths, Dangers, Strategies.
Still, AI boxing is worth researching, and should give us a higher chance of success even if it isn't an ultimate solution to the superintelligence control problem.
It has been said that an AI 'does not love you, nor does it hate you, but you are made of atoms it can use for something else.' The trick, therefore, will be to program each and every ASI such that it's "friendly," or adheres to human, or humane, values. But given our poor track record, what are some potential risks of insisting that superhuman machines be made to share all of our current values?
I really hope we can do better than programming an AI to share (some aggregation of) current human values. I shudder to think what would have happened if the Ancient Greeks had invented machine superintelligence, and given it some version of their most progressive moral values of the time. I get a similar shudder when I think of programming current human values into a machine superintelligence.
So what we probably want is not a direct specification of values, but rather some algorithm for what's called indirect normativity. Rather than programming the AI with some list of ultimate values we're currently fond of, we instead program the AI with some process for learning what ultimate values it should have, before it starts reshaping the world according to those values. There are several abstract proposals for how we might do this, but they're at an early stage of development and need a lot more work.
In conjunction with the Future of Humanity Institute at Oxford, MIRI is actively working to address the unfriendliness problem — even before we know anything about the design of future AIs. What's your current strategy?
Yes, as far as I know, only MIRI and FHI are funding full-time researchers devoted to the superintelligence control problem. There's a new group at Cambridge University called CSER that might hire additional researchers to work on the problem as soon as they get funding, and they've gathered some really top-notch people as advisors — including Stephen Hawking and George Church.
FHI's strategy thus far has been to assemble a map of the problem and our strategic situation with respect to it, and to try to get more researchers involved, e.g. via the AGI Impacts conference in 2012.
MIRI works closely with FHI and has also done this kind of "strategic analysis" research, but we recently decided to specialize in Friendly AI math research, primarily via math research workshops tackling various sub-problems of Friendly AI theory. To get a sense of what Friendly AI math research currently looks like, see these results from our latest workshop, and see my post From Philosophy to Math to Engineering.
What's the current thinking on how we can develop an ASI that's both human-friendly and incapable of modifying its core values?
I suspect the solution to the "value loading problem" (how do we get desirable goals into the AI?) will be something that qualifies as an indirect normativity approach, but even that is hard to tell at this early stage.
As for making sure the system keeps those desirable goals even as it modifies its core algorithms for improved performance — well, we're playing with toy models of that problem via the "tiling agents" family of formalisms, because toy models are a common method for making research progress on poorly-understood problems, but the toy models are very far from how a real AI would work.
How optimistic are you that we can solve this problem? And how could we benefit from a safe and friendly ASI that's not hell bent on destroying us?
The benefits of Friendly AI would be literally astronomical. It's hard to say how something much smarter than me would optimize the world if it were guided by values more advanced than my own, but I think an image that evokes the appropriate kind of sentiment would be: self-replicating spacecraft planting happy, safe, flourishing civilizations throughout our galactic supercluster — that kind of thing.
Superintelligence experts — meaning, those who research the problem full-time, and are familiar with the accumulated evidence and arguments for and against various positions on the subject — have differing predictions about whether humanity is likely to solve the problem.
As for myself, I'm pretty pessimistic. The superintelligence control problem looks much harder to solve than, say, the global risks from climate change or synthetic biology, and I don't think our civilization's competence and rationality are improving quickly enough for us to be able to solve the problem before the first machine superintelligence is built. But this hypothesis, too, is one that can be studied to improve our predictions about it. We took some initial steps in studying this question of "civilization adequacy" here.
Top: Andrea Danti/Shutterstock.
This article originally appeared at io9.

How would humanity change if we knew aliens existed?


We have yet to discover any signs of an extraterrestrial civilization — a situation that could quite literally change overnight. Should that happen, our sense of ourselves and our place in the cosmos would forever be shaken. It could even change the course of human history. Or would it?

Top image: Josh Kao; more about this artist here.

Last week, SETI's Seth Shostak made the claim that we'll detect an alien civilization by 2040. Personally, I don't believe this will happen (for reasons I can elucidate in a future post — but the Fermi Paradox is definitely a factor, as is the problem of receiving coherent radio signals across stellar distances). But it got me wondering: What, if anything, would change in the trajectory of a civilization's development if it had definitive proof that intelligent extraterrestrials (ETIs) were real?

Finding a World Much Like Our Own

As I thought about this, I assumed a scenario with three basic elements.

First, that humanity would make this historic discovery within the next several years or so. Second, that we wouldn't actually make contact with the other civilization (just the receipt, say, of a radio transmission — something like a Lucy Signal that would cue us to their existence). And third, that the ETI in question would be at roughly the same level of technological development as our own (so they're not too much more advanced than we are; that said, if the signal came from an extreme distance, like hundreds or thousands of light-years away, these aliens would probably have advanced appreciably by now. Or they could be gone altogether, the victims of a self-inflicted disaster).

I tossed this question over to my friend and colleague Milan Cirkovic. He's a Senior Research Associate at the Astronomical Observatory of Belgrade and a leading expert on SETI.

"Well, that's a very practical question, isn't it?" he responded. "Because people have been expecting something like this since 1960 when SETI was first launched — they haven't really been expecting to find billion-year old supercivilizations or just some stupid bacteria."

Indeed, the underlying philosophy of SETI over the course of its 50-year history has been that we'll likely detect a civilization roughly equal to our own — for better or worse. And no doubt, in retrospect it started to look "for worse" when the hopes of an early success were dashed. Frank Drake and his colleagues thought they would find signs of ETIs fairly quickly, but that turned out not to be the case (though Drake's echo can still be heard in the unwarranted contact optimism of Seth Shostak).

"Enormous Implications" 

"Some people argued that a simple signal wouldn't mean much for humanity," added Cirkovic, "but I think Carl Sagan, as usual, had a good response to this."

Specifically, Sagan said that the very understanding that we are not unique in the universe would have enormous implications for all those fields in which anthropocentrism reigns supreme.

"Which means, I guess, half of all the sciences and about 99% of the other, non-scientific discourse," said Cirkovic.

Sagan also believed that the detection of a signal would reignite enthusiasm for space in general, both in terms of research and eventually the colonization of space.

"The latter point was quite prescient, actually, because at the time he said this there wasn't much enthusiasm about it and it was much less visible and obvious than it is today," he added.

No doubt — this would likely generate tremendous excitement and enthusiasm for space exploration. In addition to expanding into space ourselves, there would be added impetus to reach out and meet them.

At the same time, however, some here on Earth might counterargue that we should stay home and hide from potentially dangerous civilizations (ah, but what if everybody did this?). Ironically, some might even argue that we should significantly ramp up our space and military technologies to meet potential alien threats.

Developmental Trajectories

In response to my query about the detection of ETIs affecting the developmental trajectory of civilizations, Cirkovic replied that both of Sagan's points can be generalized to any civilization at an early stage of its development.

He believes that overcoming speciesist biases, along with maintaining a constant interest in and interaction with the cosmic environment, must be desirable for any (even remotely) rational actors anywhere. But Cirkovic says there may be exceptions — like species who emerge from radically different environments, say, the atmospheres of Jovian planets. Such species would likely have little interest in surrounding space, which would be invisible to them practically 99% of the time.

So if Sagan is correct, detecting an alien civilization at this point in our history would likely be a good thing. In addition to fostering science and technological development, it would motivate us to explore and colonize space. And who knows, it could even instigate significant cultural and political changes (including the advent of political parties both in support of and in opposition to all this). It could even lead to new religions, or eliminate them altogether.

Another possibility is that nothing would change. Life on Earth would go on as per usual as people work to pay their bills and keep a roof over their heads. There could be a kind of detachment from the whole thing, leading to a certain ambivalence.

At the same time, however, it could lead to hysteria and paranoia. Even worse, and in twisted irony, the detection of a civilization equal to our own (or any life less advanced than us, for that matter) could be used to fuel the Great Filter Hypothesis of the Fermi Paradox. According to Oxford's Nick Bostrom, such a discovery would be a strong indication that doom awaits us in the (likely) near future — a filter that affects all civilizations at or near our current technological stage. The reason, says Bostrom, is that in the absence of a Great Filter, the galaxy should be teeming with super-advanced ETIs by now. Which it's clearly not.

Yikes. Stupid Fermi Paradox — always getting in the way of our future plans.

This article originally appeared at io9.