February 21, 2014

Bioengineered monkeys with human genetic diseases have almost arrived — and that's awful

Looking to create more accurate experimental models for human diseases, biologists have created transgenic monkeys with "customized" mutations. It's considered a breakthrough in the effort to produce more human-like monkeys — but the ethics of all this are dubious at best.
Yup, scientists know that mouse models suck. Though mice are used in nearly 60% of all experiments, they're among the most unreliable test subjects when it comes to approximating human biological processes (what the hell is an autistic mouse, anyway?).
Great apes, like chimpanzees and bonobos, obviously make for better test subjects. But given how close these animals are to humans in terms of their cognitive and emotional capacities, they're increasingly being seen as ethically inappropriate models for experiments. Indeed, medical experiments on apes are on the way out. There's currently a great ape research ban in the Netherlands, New Zealand, the United Kingdom, Sweden, Germany, and Austria (where it's also illegal to test on lesser apes, like gibbons). In the US, where there are still over 1,200 chimps used for biomedical research, the NIH has decided to stop the practice.

Monkey in the Middle

Regrettably, all this is making monkeys increasingly vulnerable to medical testing. Given that they're primates, and that their brains and bodies are so closely related to our own, they're the logical substitute. But it's for these same reasons that they shouldn't be used in the first place.
Making matters worse, researchers are now actively trying to humanize monkeys by using gene-editing technologies, specifically the CRISPR/Cas9 system. In the latest "breakthrough," Chinese researchers successfully produced twin cynomolgus monkeys with mutations in two separate genes, one that helps regulate metabolism and one that's involved in healthy immune function.
For the most part, these monkeys are okay (setting aside the fact that they're lab monkeys who will be experimented upon for the rest of their lives). But it's an important proof-of-concept that will lead to more advanced precision gene-editing techniques. Eventually, researchers will be able to create monkeys with more serious human conditions — like autism, schizophrenia, Alzheimer's, and severe immune dysfunction.
"We need some non-human primate models," said stem-cell biologist Hideyuki Okano in a recent Nature News article. The reason, he says, is that human neuropsychiatric disorders are particularly difficult to replicate in the simple nervous systems of mice.
That's right — monkeys with human neuropsychiatric disorders.

Where's the Ethics?

Speaking of that Nature News article — and I'm not trying to pick on them because many science journals tend to gloss over the ethical aspects of this sort of research — their coverage of this news was utterly distasteful, to say the least. Here's how they packaged it:

Awww, so adorable. Let's gush over how cute they are, but then talk about how psychologically deranged we're going to make them.
Thankfully, this breakthrough comes at a time when it's becoming (slightly) more difficult for scientists to experiment on monkeys. Back in 2012, United Airlines announced that it would stop transporting research monkeys — eliminating the last North American air carrier still available to primate researchers. Moreover, there are other options for scientists when it comes to research. 
In closing, and in the words of animal rights advocate Peter Singer, "Animals are an end unto themselves because their suffering matters."
Image: jeep2499/Shutterstock.
This article originally appeared at io9. 

February 17, 2014

Why You Should Upload Yourself to a Supercomputer


We're still decades — if not centuries — away from being able to transfer a mind to a supercomputer. It's a fantastic future prospect that makes some people incredibly squeamish. But there are considerable benefits to living a digital life. Here's why you should seriously consider uploading.

As I've pointed out before, uploading is not a given; there are many conceptual, technological, ethical, and security issues to overcome. But for the purposes of this Explainer, we're going to assume that uploads, or digital mind transfers, will eventually be possible — whether it be from the scanning and mapping of a brain, serial brain sectioning, brain imaging, or some unknown process.

Indeed, it's a prospect that's worth talking about. Many credible scientists, philosophers, and futurists believe there's nothing inherently intractable about the process. The human brain — an apparently substrate-independent Turing Machine — adheres to the laws of physics in a material universe. Eventually, we'll be able to create a model of it using non-biological stuff — and even convert, or transfer, existing analog brains to digital ones.

So, assuming you'll live long enough to see it — and muster up the courage to make the paradigmatic leap from meatspace to cyberspace — here's what you have to look forward to:

An End to Basic Biological Functions

Once you're living as a stream of 1's and 0's you'll never have to worry about body odor, going to the bathroom, or having to brush your teeth. You won't need to sleep or have sex — unless, of course, you program yourself such that you'll both want and need to do these things (call it a purist aesthetic choice).



At the same time, you won't have to worry about rising cholesterol levels, age-related disorders, or broken bones. But you will have to worry about viruses (though they'll be of a radically different sort), hackers, and maintaining unhindered access to processing power.

Radically Extended Life

The end of an organic, biological human life will offer the potential for an indefinitely long one. For many, virtual immortality will be the primary appeal of uploading. So long as the supercomputer in which you reside is kept secure and safe (e.g. by planning an exodus from the solar system when the Sun enters its death throes), you should be able to live until the universe is torn apart in the Big Rip — something that shouldn't happen for another 22 billion years.

Creating Backup Copies

I spoke to futurist John Smart about this one. He's someone who's actually encouraging the development of the technologies required for brain preservation and uplift. To that end, he's the Vice President of the Brain Preservation Foundation, a not-for-profit research group working to evaluate — and award — a number of scanning and preservation strategies.



Smart says it's a good idea to create an upload as a backup for your bioself while you're still alive.

"We are really underthinking the value of this," he told io9. "With molecular-scale MRI, which may be possible for large tissue samples in a few decades, and works today for a few cubic nanometers, people may do nondestructive self-scanning (uploading) of their brains while they are alive, mid- to late-21st century."

Smart says that if he had such a backup on file, he would be far more zen about his own biological death.

"I could see whole new philosophical movements opening up around this," he says. "Would you run your upload as an advisor/twin while you are alive? Or just keep him as your backup, to boot up whenever you choose to leave biolife, for whatever personal reasons? I think people will want both choices, and both options will be regularly chosen."

Making Virtually Unlimited Copies of Yourself

Related to the previous idea, we could also create an entire armada of ourselves for any number of purposes.



"The ability to make arbitrary numbers of copies of yourself, to work on tough problems, or try out different personal life choice points, and to reintegrate them later, or not, as you prefer, will be a great new freedom of uploads," says Smart. "This happens already when we argue with ourselves. We are running multiple mindset copies — and we must be careful with that, as it can sometimes lead to dissociative personality disorder when combined with big traumas — but in general, multiple mindsets for people, and multiple instances of self, will probably be a great new capability and freedom."

Smart points to the fictional example of Jamie Madrox, aka Multiple Man, the comic book superhero who can create, and later reabsorb, "dupes" of himself, with all their memories and experiences.

Dramatically Increased Clock Speed

Aside from indefinite lifespans, this may be one of the sweetest aspects of uploading. Living in a supercomputer would be like Neo operating in Bullet Time, or like the small animals that perceive the world in slow motion relative to humans. We could do more thinking, get more done, and experience more than wetware organisms functioning in "real time." And best of all, this will significantly increase the amount of relative time we can have in the Universe before it comes to a grinding halt.
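
To get a feel for what a faster clock speed buys you, here's a minimal back-of-the-envelope sketch. The speedup factor is a made-up number chosen purely for illustration, not a prediction.

```python
# Toy illustration of subjective vs. objective time for an upload.
# The speedup factor is an arbitrary assumption, not a prediction.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def subjective_years(real_seconds, speedup):
    """Subjective years experienced during a span of real (wall-clock) time."""
    return real_seconds * speedup / SECONDS_PER_YEAR

speedup = 1_000_000  # assume the upload runs a million times faster than wetware

print(subjective_years(3600, speedup))              # one real hour -> ~114 subjective years
print(subjective_years(SECONDS_PER_YEAR, speedup))  # one real year -> ~1,000,000 subjective years
```

At that (entirely assumed) rate, even a short errand in the physical world would cost an upload months or years of subjective time, which is the tradeoff the "Downloading to an External Body" section below touches on.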



"I think the potential for increased clock speeds is the central reason why uploads are the next natural step for leading edge intelligence on Earth," says Smart. "We seem to be rushing headlong to virtual and physical "inner space."

Radically Reduced Global Footprints

Uploading is also environmentally friendly, something that could help us address our perpetually growing population — especially in consideration of radical life extension at the biological level. In fact, transferring our minds to digital substrate may actually be a matter of necessity. Sure, we'll need powerful supercomputers to run the billions — if not trillions — of individual digital experiences, but the relatively low power requirements and reduced levels of fossil fuel emissions simply can't compare to the burden we impose on the planet with our corporeal civilization.

Intelligence Augmentation

It'll also be easier to enhance our intelligence when we're purely digital. Trying to boost the cognitive power of a biological brain is prohibitively difficult and dangerous. A digital mind, on the other hand, would be flexible, robust, and easy to repair. Augmented virtual minds could have higher IQ-type intelligence, enhanced memory, and increased attention spans. We'll need to be very careful about going down this path, however, as it could lead to an out-of-control transcending upload — or even insanity.

Designer Psychologies

Uploads will also enable us to engineer and assume any number of alternative psychological modalities. Human experience is currently dominated by the evolutionary default we call neurotypicality, though outliers exist along the autistic spectrum and other so-called psychological "disorders." Customized cognitive processing frameworks would allow uploaded individuals to selectively alter the specific and unique ways in which they absorb, analyze, and perceive the world, allowing for variation in subjectivity, social engagement, aesthetics, and biases. These frameworks could also be changed on the fly, depending on the context, or just to try out what it feels like to be another person.

Enhanced Emotion Control

Somewhat related to the last one, uploaded individuals will also be able to monitor, regulate, and choose their emotional states and level of subjective well-being, including happiness.



Uploads could default to the normal spectrum of human emotion, or choose to operate within a predefined band of emotional variability — including, more speculatively, the introduction of new emotions altogether. Safety mechanisms could be built in to prevent a person from spiraling into a state of debilitating depression — or a state of perpetual bliss, unless that's precisely what the upload is seeking.

A Better Hive Mind

Linking biological minds to create a kind of technologically enabled telepathy, or techlepathy, is probably possible. But as I've pointed out before, it'll be exceptionally difficult and messy. A fundamental problem will be translating signals, or thoughts, in a sensible way such that each person in the link-up has the same mental representation for a given object or concept. This translation problem could be overcome by developing standard brain-to-brain communication protocols, or by developing innate translation software. And of course, because all the minds are in the same computer, establishing communication links will be a breeze.
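
To make the translation problem a bit more concrete, here is a toy sketch of what a shared "concept ID" protocol might look like. Everything in it (the registry, both minds' private vocabularies) is hypothetical and invented purely for illustration.

```python
# Toy sketch of a shared "concept ID" protocol for mind-to-mind messaging.
# The registry and both vocabularies are invented purely for illustration.

SHARED_REGISTRY = {"apple": 1001, "gravity": 1002}   # agreed-upon concept IDs

# Each mind labels the same underlying concepts in its own private way.
MIND_A = {"heavy-pull": "gravity", "red-crunch": "apple"}        # private label -> shared concept
MIND_B = {"gravity": "falling-feeling", "apple": "fruit-thing"}  # shared concept -> private label

def encode(private_label):
    """Sender: translate a private representation into a shared concept ID."""
    return SHARED_REGISTRY[MIND_A[private_label]]

def decode(concept_id):
    """Receiver: translate a shared concept ID back into a private representation."""
    concept = next(name for name, cid in SHARED_REGISTRY.items() if cid == concept_id)
    return MIND_B[concept]

print(decode(encode("heavy-pull")))  # Mind A's "heavy-pull" arrives as Mind B's "falling-feeling"
```

The point of the sketch is simply that both parties need to agree on the middle layer; without some such standard, each mind's private representations are mutually unintelligible.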

Toying With Alternative Physics

Quite obviously, uploads will be able to live in any number of virtual reality environments. These digital worlds will be like souped-up and fully immersive versions of Second Life or World of Warcraft. But why limit ourselves to the physics of the Known Universe when we can tweak it any number of ways? Uploads could add or take away physical dimensions, lower the effect of gravity, increase the speed of light, and alter the effects of electromagnetism. All bets are off in terms of what's possible and the kind of experiences that could be had. By comparison, life in the analog world will seem painfully limited and constrained.

Downloading to an External Body

Now, just because you've uploaded yourself to a supercomputer doesn't mean you have to stay there. Individuals will always have the option of downloading themselves into a robotic or cyborg body, even if it's just temporary. But as portrayed in Greg Egan's scifi classic, Diaspora, these ventures outside the home supercomputer will come with a major drawback — one that's closely tied to the clock speed issue: every moment a person spends in the real, analog world will be equivalent to months or even years in the virtual world. Consequently, you'll need to be careful about how much time you spend off the grid.

Interstellar Space Travel

As futurist Giulio Prisco has noted, it probably makes most sense to send uploaded astronauts on interstellar missions. He writes:

The very high cost of a crewed space mission comes from the need to ensure the survival and safety of the humans on board and the need to travel at extremely high speeds to ensure it's done within a human lifetime. One way to overcome that is to do without the wetware bodies of the crew, and send only their minds to the stars - their "software" — uploaded to advanced circuitry, augmented by AI subsystems in the starship's processing system...An e-crew — a crew of human uploads implemented in solid-state electronic circuitry — will not require air, water, food, medical care, or radiation shielding, and may be able to withstand extreme acceleration. So the size and weight of the starship will be dramatically reduced.

Tron Legacy concept art by David Levy.

This article originally appeared at io9.

Can we build an artificial superintelligence that won't kill us?


At some point in our future, an artificial intelligence will emerge that's smarter, faster, and vastly more powerful than us. Once this happens, we'll no longer be in charge. But what will happen to humanity? And how can we prepare for this transition? We spoke to an expert to find out.

Luke Muehlhauser is the Executive Director of the Machine Intelligence Research Institute (MIRI) — a group that's dedicated to figuring out the various ways we might be able to build friendly smarter-than-human intelligence. Recently, Muehlhauser coauthored a paper with the Future of Humanity Institute's Nick Bostrom on the need to develop friendly AI.
io9: How did you come to be aware of the friendliness problem as it relates to artificial superintelligence (ASI)?
Muehlhauser: Sometime in mid-2010 I stumbled across a 1965 paper by I.J. Good, who worked with Alan Turing during World War II to decipher German codes. One paragraph in particular stood out:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind... Thus the first ultraintelligent machine is the last invention that man need ever make.
I didn't read science fiction, and I barely knew what "transhumanism" was, but I immediately realized that Good's conclusion followed directly from things I already believed, for example that intelligence is a product of cognitive algorithms, not magic. I pretty quickly realized that the intelligence explosion would be the most important event in human history, and that the most important thing I could do would be to help ensure that the intelligence explosion has a positive rather than negative impact — that is, that we end up with a "Friendly" superintelligence rather than an unfriendly or indifferent superintelligence.
Initially, I assumed that the most important challenge of the 21st century would have hundreds of millions of dollars in research funding, and that there wouldn't be much value I could contribute on the margin. But in the next few months I learned, to my shock and horror, that fewer than five people in the entire world had devoted themselves full-time to studying the problem, and they had almost no funding. So in April 2011 I quit my network administration job in Los Angeles and began an internship with MIRI, to learn how I might be able to help. It turned out the answer was "run MIRI," and I was appointed MIRI's CEO in November 2011.
Spike Jonze's latest film, Her, has people buzzing about artificial intelligence. What can you tell us about the portrayal of AI in that movie and how it would compare to artificial superintelligence?
Her is a fantastic film, but its portrayal of AI is set up to tell a good story, not to be accurate. The director, Spike Jonze, didn't consult with computer scientists when preparing the screenplay, and this will be obvious to any computer scientists who watch the film.
Without spoiling too much, I'll just say that the AIs in Her, if they existed in the real world, would entirely transform the global economy. But in Her, the introduction of smarter-than-human, self-improving AIs hardly upsets the status quo at all. As economist Robin Hanson commented on Facebook:
Imagine watching a movie like Titanic where an iceberg cuts a big hole in the side of a ship, except in this movie the hole only affects the characters by forcing them to take different routes to walk around, and gives them more welcomed fresh air. The boat never sinks, and no one ever fears it might. That's how I feel watching the movie Her.
AI theorists like yourself warn that we may eventually lose control of our machines, a potentially sudden and rapid transition driven by two factors, computing overhang and recursive self-improvement. Can you explain each of these?
It's extremely difficult to control the behavior of a goal-directed agent that is vastly smarter than you are. This problem is much harder than a normal (human-human) principal-agent problem.
If we got to tinker with different control methods, and make lots of mistakes, and learn from those mistakes, maybe we could figure out how to control a self-improving AI with 50 years of research. Unfortunately, it looks like we may not have the opportunity to make so many mistakes, because the transition from human control of the planet to machine control might be surprisingly rapid. Two reasons for this are computing overhang and recursive self-improvement.
In our paper, my coauthor (Oxford's Nick Bostrom) and I describe computing overhang this way:
Suppose that computing power continues to double according to Moore's law, but figuring out the algorithms for human-like general intelligence proves to be fiendishly difficult. When the software for general intelligence is finally realized, there could exist a 'computing overhang': tremendous amounts of cheap computing power available to run [AIs]. AIs could be copied across the hardware base, causing the AI population to quickly surpass the human population.
Another reason for a rapid transition from human control to machine control is the one first described by I.J. Good, what we now call recursive self-improvement. An AI with general intelligence would correctly realize that it will be better able to achieve its goals — whatever its goals are — if it does original AI research to improve its own capabilities. That is, self-improvement is a "convergent instrumental value" of almost any "final" values an agent might have, which is part of why self-improvement books and blogs are so popular. Thus, Bostrom and I write:
When we build an AI that is as skilled as we are at the task of designing AI systems, we may thereby initiate a rapid, AI-motivated cascade of self-improvement cycles. Now when the AI improves itself, it improves the intelligence that does the improving, quickly leaving the human level of intelligence far behind.
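(For illustration, both dynamics can be caricatured in a few lines of code. The toy model below is my own sketch, not anything from Muehlhauser and Bostrom's paper; every number in it is an arbitrary assumption chosen only to show the shape of the dynamics.)
```python
# Crude toy model of "computing overhang" and recursive self-improvement.
# All numbers are arbitrary assumptions for illustration, not predictions.

# Computing overhang: once the software exists, cheap hardware lets it be copied widely.
world_compute_flops = 1e21      # assumed total cheap compute available
flops_per_ai_instance = 1e16    # assumed cost of running one human-level AI
ai_population = world_compute_flops / flops_per_ai_instance
print(f"AI copies runnable on existing hardware: {ai_population:,.0f}")  # 100,000

# Recursive self-improvement: capability feeds back into the rate of improvement.
capability = 1.0                # 1.0 = roughly human-level AI research ability
for generation in range(10):
    capability *= 1.0 + 0.5 * capability  # better researchers improve faster (assumed feedback)
    print(f"generation {generation + 1}: capability {capability:.1f}")
```
The particular numbers don't matter; the point is that copying is cheap once the software exists, and that improvement compounds on itself, which is exactly the cascade Good described.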
Some people believe that we'll have nothing to fear from advanced AI out of a conviction that something so astoundingly smart couldn't possibly be stupid or mean enough to destroy us. What do you say to people who believe an ASI will be naturally more moral than we are?
In AI, the system's capability is roughly "orthogonal" to its goals. That is, you can build a really smart system aimed at increasing Shell's stock price, or a really smart system aimed at filtering spam, or a really smart system aimed at maximizing the number of paperclips produced at a factory. As you improve the intelligence of the system, or as it improves its own intelligence, its goals don't particularly change — rather, it simply gets better at achieving whatever its goals already are.
There are some caveats and subtle exceptions to this general rule, and some of them are discussed in Bostrom (2012). But the main point is that we shouldn't stake the fate of the planet on a risky bet that all mind designs we might create eventually converge on the same moral values, as their capabilities increase. Instead, we should fund lots of really smart people to think hard about the general challenge of superintelligence control, and see what kinds of safety guarantees we can get with different kinds of designs.
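(To see why capability and goals come apart, it helps to sketch the idea in code. The toy below is my own illustration, not anything from Bostrom's or MIRI's work; the "paperclip" objective and both optimizers are invented stand-ins.)
```python
# Toy illustration of the orthogonality thesis: a smarter optimizer pursues
# the same arbitrary goal, just more effectively. Purely illustrative.
import random

def paperclips_made(plan):
    """The fixed goal: an arbitrary objective the agent is built to maximize."""
    return -((plan - 42.0) ** 2)  # the best possible plan happens to be 42

def weak_optimizer(goal, tries=10):
    """A 'dumb' agent: samples a few random plans and keeps the best."""
    return max((random.uniform(0, 100) for _ in range(tries)), key=goal)

def strong_optimizer(goal, tries=100_000):
    """A 'smart' agent: searches far more thoroughly, with the same goal."""
    return max((random.uniform(0, 100) for _ in range(tries)), key=goal)

print(paperclips_made(weak_optimizer(paperclips_made)))    # mediocre score
print(paperclips_made(strong_optimizer(paperclips_made)))  # near-optimal score, same goal
```
Scaling up the search changes nothing about what the system is trying to do; it only changes how well it does it, which is the sense in which capability is roughly orthogonal to goals.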
Why can't we just isolate potentially dangerous AIs and keep them away from the Internet?
Such "AI boxing" methods will be important during the development phase of Friendly AI, but it's not a full solution to the problem for two reasons.
First, even if the leading AI project is smart enough to carefully box their AI, the next five AI projects won't necessarily do the same. There will be strong incentives to let one's AI out of the box, if you think it might (e.g.) play the stock market for you and make you billions of dollars. Whatever you built the AI to do, it'll be better able to do it for you if you let it out of the box. Besides, if you don't let it out of the box, the next team might, and their design might be even more dangerous.
Second, AI boxing pits human intelligence against superhuman intelligence, and we can't expect the former to prevail indefinitely. Humans can be manipulated, boxes can be escaped via surprising methods, etc. There's a nice chapter on this subject in Bostrom's forthcoming book from Oxford University Press, titled Superintelligence: Paths, Dangers, Strategies.
Still, AI boxing is worth researching, and should give us a higher chance of success even if it isn't an ultimate solution to the superintelligence control problem.
It has been said that an AI 'does not love you, nor does it hate you, but you are made of atoms it can use for something else.' The trick, therefore, will be to program each and every ASI such that they're "friendly" or adhere to human, or humane, values. But given our poor track record, what are some potential risks of insisting that superhuman machines be made to share all of our current values?
I really hope we can do better than programming an AI to share (some aggregation of) current human values. I shudder to think what would have happened if the Ancient Greeks had invented machine superintelligence, and given it some version of their most progressive moral values of the time. I get a similar shudder when I think of programming current human values into a machine superintelligence.
So what we probably want is not a direct specification of values, but rather some algorithm for what's called indirect normativity. Rather than programming the AI with some list of ultimate values we're currently fond of, we instead program the AI with some process for learning what ultimate values it should have, before it starts reshaping the world according to those values. There are several abstract proposals for how we might do this, but they're at an early stage of development and need a lot more work.
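(To give a sense of the flavor of indirect normativity, here is a toy sketch: the agent first infers a value function from observed human choices, and only then optimizes against it. This is purely illustrative and is not any actual MIRI or FHI proposal; the features and data are invented.)
```python
# Toy flavor of "indirect normativity": learn values from observed human
# choices first, then optimize. Invented example, not an actual proposal.

observed_choices = [  # (option_a, option_b, which one a human preferred)
    ({"honesty": 1, "profit": 0}, {"honesty": 0, "profit": 1}, "a"),
    ({"honesty": 1, "profit": 1}, {"honesty": 0, "profit": 1}, "a"),
]

def learn_weights(choices):
    """Crude value learning: count how often each feature appears in preferred options."""
    weights = {}
    for a, b, winner in choices:
        chosen = a if winner == "a" else b
        for feature, present in chosen.items():
            weights[feature] = weights.get(feature, 0) + present
    return weights

def act(options, weights):
    """Only after values are learned does the agent start optimizing."""
    return max(options, key=lambda o: sum(weights.get(f, 0) * v for f, v in o.items()))

weights = learn_weights(observed_choices)
print(act([{"honesty": 1, "profit": 0}, {"honesty": 0, "profit": 1}], weights))
```
The real proposals are far more subtle, but the ordering is the key idea: the value-specification step happens before, not after, the agent starts reshaping the world.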
In conjunction with the Future of Humanity Institute at Oxford, MIRI is actively working to address the unfriendliness problem — even before we know anything about the design of future AIs. What's your current strategy?
Yes, as far as I know, only MIRI and FHI are funding full-time researchers devoted to the superintelligence control problem. There's a new group at Cambridge University called CSER that might hire additional researchers to work on the problem as soon as they get funding, and they've gathered some really top-notch people as advisors — including Stephen Hawking and George Church.
FHI's strategy thus far has been to assemble a map of the problem and our strategic situation with respect to it, and to try to get more researchers involved, e.g. via the AGI Impacts conference in 2012.
MIRI works closely with FHI and has also done this kind of "strategic analysis" research, but we recently decided to specialize in Friendly AI math research, primarily via math research workshops tackling various sub-problems of Friendly AI theory. To get a sense of what Friendly AI math research currently looks like, see these results from our latest workshop, and see my post From Philosophy to Math to Engineering.
What's the current thinking on how we can develop an ASI that's both human-friendly and incapable of modifying its core values?
I suspect the solution to the "value loading problem" (how do we get desirable goals into the AI?) will be something that qualifies as an indirect normativity approach, but even that is hard to tell at this early stage.
As for making sure the system keeps those desirable goals even as it modifies its core algorithms for improved performance — well, we're playing with toy models of that problem via the "tiling agents" family of formalisms, because toy models are a common method for making research progress on poorly-understood problems, but the toy models are very far from how a real AI would work.
How optimistic are you that we can solve this problem? And how could we benefit from a safe and friendly ASI that's not hell bent on destroying us?
The benefits of Friendly AI would be literally astronomical. It's hard to say how something much smarter than me would optimize the world if it were guided by values more advanced than my own, but I think an image that evokes the appropriate kind of sentiment would be: self-replicating spacecraft planting happy, safe, flourishing civilizations throughout our galactic supercluster — that kind of thing.
Superintelligence experts — meaning, those who research the problem full-time, and are familiar with the accumulated evidence and arguments for and against various positions on the subject — have differing predictions about whether humanity is likely to solve the problem.
As for myself, I'm pretty pessimistic. The superintelligence control problem looks much harder to solve than, say, the global risks from global warming or synthetic biology, and I don't think our civilization's competence and rationality are improving quickly enough for us to be able to solve the problem before the first machine superintelligence is built. But this hypothesis, too, is one that can be studied to improve our predictions about it. We took some initial steps in studying this question of "civilization adequacy" here.
Top: Andrea Danti/Shutterstock.
This article originally appeared at io9.

How would humanity change if we knew aliens existed?


We have yet to discover any signs of an extraterrestrial civilization — but that could quite literally change overnight. Should that happen, our sense of ourselves and our place in the cosmos would forever be shaken. It could even change the course of human history. Or would it?

Top image: Josh Kao; more about this artist here.

Last week, SETI's Seth Shostak made the claim that we'll detect an alien civilization by 2040. Personally, I don't believe this will happen (for reasons I can elucidate in a future post — but the Fermi Paradox is definitely a factor, as is the problem of receiving coherent radio signals across stellar distances). But it got me wondering: What, if anything, would change in the trajectory of a civilization's development if it had definitive proof that intelligent extraterrestrials (ETIs) were real?

Finding a World Much Like Our Own

As I thought about this, I assumed a scenario with three basic elements.

First, that humanity would make this historic discovery within the next several years or so. Second, that we wouldn't actually make contact with the other civilization (just the receipt, say, of a radio transmission — something like a Lucy Signal that would cue us to their existence). And third, that the ETI in question would be at roughly the same level of technological development as our own (so they're not too much more advanced than we are; that said, if the signal came from an extreme distance, like hundreds or thousands of light-years away, these aliens would probably have advanced appreciably by now. Or they could be gone altogether, the victims of a self-inflicted disaster).

I tossed this question over to my friend and colleague Milan Cirkovic. He's a Senior Research Associate at the Astronomical Observatory of Belgrade and a leading expert on SETI.

"Well, that's a very practical question, isn't it?" he responded. "Because people have been expecting something like this since 1960 when SETI was first launched — they haven't really been expecting to find billion-year old supercivilizations or just some stupid bacteria."

Indeed, the underlying philosophy of SETI over the course of its 50-year history has been that we'll likely detect a civilization roughly equal to our own — for better or worse. And no doubt, in retrospect it started to look "for worse" when the hopes of an early success were dashed. Frank Drake and his colleagues thought they would find signs of ETIs fairly quickly, but that turned out not to be the case (though Drake's echo can still be heard in the unwarranted contact optimism of Seth Shostak).

"Enormous Implications" 

"Some people argued that a simple signal wouldn't mean much for humanity," added Cirkovic, "but I think Carl Sagan, as usual, had a good response to this."

Specifically, Sagan said that the very understanding that we are not unique in the universe would have enormous implications for all those fields in which anthropocentrism reigns supreme.

"Which means, I guess, half of all the sciences and about 99% of the other, non-scientific discourse," said Cirkovic.

Sagan also believed that the detection of a signal would reignite enthusiasm for space in general, both in terms of research and eventually the colonization of space.

"The latter point was quite prescient, actually, because at the time he said this there wasn't much enthusiasm about it and it was much less visible and obvious than it is today," he added.

No doubt — this would likely generate tremendous excitement and enthusiasm for space exploration. In addition to expanding ourselves into space, there would be added impetus to reach out and meet them.

At the same time, however, some here on Earth might counterargue that we should stay home and hide from potentially dangerous civilizations (ah, but what if everybody did this?). Ironically, some might even argue that we should significantly ramp-up our space and military technologies to meet potential alien threats.

Developmental Trajectories

In response to my query about the detection of ETIs affecting the developmental trajectory of civilizations, Cirkovic replied that both of Sagan's points can be generalized to any civilization at the early stages of its development.

He believes that overcoming speciesist biases, along with maintaining a constant interest in and interaction with the cosmic environment, must be desirable for any (even remotely) rational actors anywhere. But Cirkovic says there may be exceptions — like species who emerge from radically different environments, say, the atmospheres of Jovian planets. Such species would likely have little interest in surrounding space, which would be invisible to them practically 99% of the time.

So if Sagan is correct, detecting an alien civilization at this point in our history would likely be a good thing. In addition to fostering science and technological development, it would motivate us to explore and colonize space. And who knows, it could even instigate significant cultural and political changes (including the advent of political parties both in support of and in opposition to all this). It could even lead to new religions, or eliminate them altogether.

Another possibility is that nothing would change. Life on Earth would go on as per usual as people work to pay their bills and keep a roof over their heads. There could be a kind of detachment from the whole thing, leading to a certain ambivalence.

At the same time however, it could lead to hysteria and paranoia. Even worse, and in twisted irony, the detection of a civilization equal to our own (or any life less advanced than us, for that matter) could be used to fuel the Great Filter Hypothesis of the Fermi Paradox. According to Oxford's Nick Bostrom, this would be a strong indication that doom awaits us in the (likely) near future — a filter that affects all civilizations at or near our current technological stage. The reason, says Bostrom, is that in the absence of a Great Filter, the galaxy should be teeming with super-advanced ETIs by now. Which it's clearly not.

Yikes. Stupid Fermi Paradox — always getting in the way of our future plans.

This article originally appeared at io9.

December 27, 2013

Top 100 Songs of 2013

There were so many great songs this year. Here are the top 100:

1. Arcade Fire: Reflektor


2. Disclosure: White Noise (feat. AlunaGeorge)


3. Chvrches: Lies

4. Savages: She Will

5. Braids: In Kind

6. Vampire Weekend: Ya Hey

7. Fuck Buttons: Sentients

8. Kurt Vile: Wakin On A Pretty Day

9. Darkside: Golden Arrow

10. Pharmakon: Crawling on Bruised Knees

11. Boards of Canada: Palace Posy

12. Thee Oh Sees: No Spell

13. Courtney Barnett: Avant Gardener

14. Low: Plastic Cup

15. Junip: Line of Fire

16. Neko Case: Night Still Comes

17. Vampire Weekend: Step

18. Disclosure: Latch (feat. Sam Smith)

19. Washed Out: Great Escape

20. Rhye: Open


Here's the complete list:

  1. Arcade Fire: Reflektor
  2. Disclosure: White Noise (feat. AlunaGeorge)
  3. Chvrches: Lies
  4. Savages: She Will
  5. Braids: In Kind
  6. Vampire Weekend: Ya Hey
  7. Fuck Buttons: Sentients
  8. Kurt Vile: Wakin On A Pretty Day
  9. Darkside: Golden Arrow
  10. Pharmakon: Crawling on Bruised Knees
  11. Boards of Canada: Palace Posy
  12. Thee Oh Sees: No Spell
  13. Courtney Barnett: Avant Gardener
  14. Low: Plastic Cup
  15. Junip: Line of Fire
  16. Neko Case: Night Still Comes
  17. Vampire Weekend: Step
  18. Disclosure: Latch (feat. Sam Smith)
  19. Washed Out: Great Escape
  20. Rhye: Open
  21. Doldrums: Egypt
  22. Dawn of Midi: Dysnomia
  23. Foals: Inhaler
  24. Chvrches: The Mother We Share
  25. Arcade Fire: We Exist
  26. Jon Hopkins: Open Eye Signal
  27. James Blake: Retrograde
  28. Parquet Courts: Master of My Craft/Light Up Gold
  29. Deafheaven: Dream House
  30. Danny Brown: Kush Coma Feat. A$AP Rocky & Zelooperz
  31. Darkside: The Only Shrine I've Seen
  32. Thee Oh Sees: Minotaur
  33. Bill Callahan: Javelin Unlanding
  34. Low: Clarence White
  35. Waxahatchee: Peace and Quiet
  36. James Blake: Digital Lion
  37. Disclosure: Help Me Lose My Mind (feat. London Grammar)
  38. The Knife: Full of Fire
  39. Kurt Vile: Goldtone
  40. Death Grips: Feels like a wheel
  41. Ty Segall: She Don't Care
  42. Smith Westerns: 3am Spiritual
  43. Phosphorescent: Song for Zula
  44. Blood Orange: You're Not Good Enough
  45. Run The Jewels: Banana Clipper feat. Big Boi
  46. Kanye West: Blood On The Leaves
  47. Alice In Chains: Hollow
  48. Foxygen: San Francisco
  49. Nine Inch Nails: Copy of A
  50. Charli XCX: You (Ha Ha Ha)
  51. The Flaming Lips: You Lust (feat. Phantogram)
  52. Lorde: Royals
  53. Waxahatchee: Swan Dive
  54. Savages: Hit Me
  55. Foxygen: No Destruction
  56. Ty Segall: Sleeper
  57. Forest Swords: Thor's Stone
  58. Daft Punk: Doin' it right
  59. Kanye West: Bound 2
  60. Arcade Fire: Here Comes the Night
  61. FKA twigs: Water Me
  62. Fuck Buttons: The Red Wing
  63. Death Grips: Whatever I want (Fuck who's watching)
  64. Boards of Canada: Reach For The Dead
  65. Majical Cloudz: Bugs Don't Buzz
  66. Chvrches: Recover
  67. Majical Cloudz: Childhood's End
  68. Prurient: You Show Great Spirit
  69. Drake: Hold On, We're Going Home (feat. Majid Jordan)
  70. Vampire Weekend: Hannah Hunt
  71. Danny Brown: Side B (Dope Song)
  72. Mikal Cronin: Weight
  73. Youth Lagoon: Dropla
  74. The Knife: Tooth For An Eye
  75. These New Puritans: Fragment Two
  76. Daft Punk: Get lucky
  77. Perfect Pussy: I
  78. Ty Segall: The West
  79. The National: Sea Of Love
  80. Chance The Rapper: Chain Smoker
  81. Parquet Courts: Stoned and Starving
  82. Nick Cave And The Bad Seeds: Mermaids
  83. Mutual Benefit: Golden Wake
  84. Haim: Falling
  85. Thee Oh Sees: Toe Cutter - Thumb Buster
  86. Forest Swords: The Weight Of Gold
  87. Drake: Worst Behavior
  88. M.I.A.: Come Walk With Me
  89. Chvrches: Gun
  90. Local Natives: Colombia
  91. Low: Just Make It Stop
  92. Yeah Yeah Yeahs: Sacrilege
  93. FKA twigs: Papi Pacify
  94. Deerhunter: Monomania
  95. Inter Arma: Sblood
  96. Earl Sweatshirt: Sunday (ft. Frank Ocean)
  97. Torres: Honey
  98. Savages: Shut Up
  99. Fuck Buttons: Brainfreeze
  100. Neko Case: Man


December 21, 2013

Best Albums of 2013

Another year, another amazing batch of albums. Here's the year's best.

1. Disclosure: Settle

2. Arcade Fire: Reflektor

3. Savages: Silence Yourself

4. Vampire Weekend: Modern Vampires of the City

5. Chvrches: The Bones of What You Believe

6. Dawn of Midi: Dysnomia

7. Parquet Courts: Light Up Gold

8. Boards of Canada: Tomorrow’s Harvest

9. Fuck Buttons: Slow Focus

10. Kurt Vile: Wakin On A Pretty Daze

11. Thee Oh Sees: Floating Coffin

12. Jon Hopkins: Immunity

13. Darkside: Psychic

14. James Blake: Overgrown

15. Rhye: Woman

16. Bill Callahan: Dream River

17. Run the Jewels: Run the Jewels

18. Deafheaven: Sunbather

19. Washed Out: Paracosm

20. Death Grips: Government Plates

Albums 21 to 50:

21. Youth Lagoon: Wondrous Bughouse
22. Ty Segall: Sleeper
23. Low: The Invisible Way
24. Speedy Ortiz: Major Arcana
25. Oneohtrix Point Never: R Plus Seven
26. Foxygen: We Are the 21st Century Ambassadors of Peace & Magic
27. Phosphorescent: Muchacho
28. Danny Brown: Old
29. The Flaming Lips: The Terror
30. The Haxan Cloak: Excavation
31. Inter Arma: Sky Burial
32. Alice in Chains: The Devil Put Dinosaurs Here
33. Carcass: Surgical Steel
34. FKA Twigs: EP2
35. Deerhunter: Monomania
36. Kanye West: Yeezus
37. Waxahatchee: Cerulean Salt
38. The Knife: Shaking the Habitual
39. Volcano Choir: Repave
40. Forest Swords: Engravings
41. Pharmakon: Abandon
42. Haim: Days Are Gone
43. Drake: Nothing Was the Same
44. Doldrums: Lesser Evil
45. Gorguts: Colored Sands
46. Of Montreal: Lousy With Sylvianbriar
47. Blood Orange: Cupid Deluxe
48. These New Puritans: Field of Reeds
49. Fuzz: Fuzz
50. Perfect Pussy: I Have Lost All Desire For Feeling

Honorable mention:

The National: Trouble Will Find Me
The Field: Cupid’s Head
Tim Hecker: Virgins
Grouper: The Man Who Died in His Boat
Charli XCX: True Romance
Autre Ne Veut: Anxiety
Local Natives: Hummingbird
The Naked and Famous: In Rolling Waves
Mutual Benefit: Love’s Crushing Diamond
Daft Punk: Random Access Memories
Foals: Holy Fire
Smith Westerns: Soft Will
Chance the Rapper: Acid Rap
Prurient: Through the Window
Soft Metals: Lenses
Nick Cave and the Bad Seeds: Push the Sky Away
Toro Y Moi: Anything in Return
Russian Circles: Memorial
Iceage: You’re Nothing
Kirin J Callinan: Embracism
My Bloody Valentine: MBV
Mikal Cronin: Mcii
Kylesa: Ultraviolet
Janelle MonĂ¡e: The Electric Lady
Neko Case: The Worse Things Get, the Harder I Fight…
Torres: Torres
Earl Sweatshirt: Doris
Julia Holter: Loud City Song
Lorde: Pure Heroine
The Men: New Moon
Ulrich Schnauss: A Long Way to Fall
Sigur Ros: Kveikur
Nine Inch Nails: Hesitation Marks
Junip: Junip
David Bowie: The Next Day
Cult of Luna: Vertikal
Gold Panda: Half of Where You Live
White Fence: Cyclops Reap
Baths: Obsidian
Cass McCombs: Big Wheel and Others
Pelican: Forever Becoming
Arctic Monkeys: AM

December 14, 2013

Yes, One Person Could Actually Destroy the World

Apocalyptic weapons are currently the domain of world powers. But this is set to change. Within a few decades, small groups — and even single individuals — will be able to get their hands on any number of extinction-inducing technologies. As shocking as it sounds, the world could be destroyed by a small team or a person acting alone. Here's how.
To learn more about this grim possibility, I spoke to two experts who have given this subject considerable thought. Philippe van Nedervelde is a reserve officer with the Belgian Army's ACOS Strat unit who's trained in Nuclear-Biological-Chemical defense. He's a futurist and security expert with a specialization in existential risks, sousveillance, surveillance, and privacy issues, and is currently involved with, among others, the P2P Foundation. James Barrat is the author of Our Final Invention: Artificial Intelligence and the End of the Human Era — a new book concerned with the risks posed by the advent of super-powerful machine intelligence.
Both van Nedervelde and Barrat are concerned that we're not taking this possibility seriously enough.
"The vast majority of humanity today seems blissfully unaware of the fact that we actually are in real danger," said van Nedervelde. "While it is important to stay well clear of any fear mongering and undue alarmism, the naked facts do tell us that we are, jointly and severally, in 'clear and present' mortal danger. What is worse is that a kind of 'perfect storm' of coinciding and converging existential risks is brewing."
If we're going to survive the next few millennia, he says, we are going to need to get through the next few critical decades as unscathed as we can.

Weapons of Choice

As a species living in an indifferent universe, we face both cosmic and human-made existential risks.
According to van Nedervelde, the most serious human-made risks include a bio-attack pandemic, a global exchange of thermonuclear bombs, the emergence of an artificial superintelligence that's unfriendly to humans, and the spectre of nanotechnology-enabled weapons of mass destruction.
"The threat of a bio-attack or malicious man-made pandemic is potentially particularly dangerous in the relatively short term," he says.
Indeed, a still fairly recent 20th century precedent showed us how serious it could be, even though in that case it was 'just' a natural pandemic: the 1918 Spanish flu, which killed between 50 and 100 million people. That was between 2.5% and 5% of the entire global population at the time.
"Humanity has developed the technology needed to design effective and efficient pathogens," he told io9. "We dispose of the know-how needed to optimize their functioning and combine them for potency. If developed for that purpose, weaponized pathogens may ultimately succeed in killing nearly all and possibly even all of humanity."
With regard to predictable future forms of weaponized nanotechnology, nanomedicine theorist Robert Freitas has distinguished between 'aerovores' (a.k.a. 'grey dust'), 'grey plankton', 'grey lichens', and so-called 'biomass killers'. These are variations on the grey goo threat — a hellish scenario in which self-replicating molecular robots completely consume the Earth or resources critical for human survival, like the atmosphere.
Aerovores would blot out all sunlight. Grey plankton would consist of seabed-grown replicators that eat up land-based carbon-rich ecology, grey lichens could destroy land-based geology, and biomass killers would attack various organisms.
And lastly, as Barrat explained to me, there's the threat of artificial superintelligence. Within a few decades, AI could surpass human intelligence by an order of magnitude. Once unleashed, it could have survival drives much like our own, or it could be poorly programmed. We may be forced to compete with a rival that exceeds our capacities in ways we can scarcely imagine.

Destroying More With Less

Obviously, many, if not all, of these technologies will be developed by either highly-funded and highly-motivated government agencies and corporations. But that doesn't mean the blueprints for these things won't eventually make their way into the hands of nefarious groups, or that they won't be able to figure many of these things out for themselves.
It's a prospect that's not lost on the Pentagon. Speaking back in 1995, Admiral David E. Jeremiah of the US Joint Chiefs of Staff had this to say:
Somewhere in the back of my mind I still have this picture of five smart guys from Somalia or some other non-developed nation who see the opportunity to change the world. To turn the world upside down. Military applications of molecular manufacturing have even greater potential than nuclear weapons to radically change the balance of powers.
And as the White House US National Security Council has stated, "We are menaced less by fleets and armies than by catastrophic technologies in the hands of the embittered few."
In this context, van Nedervelde talked to me about 'Asymmetric Destructive Capability' (ADC).
"It means that with advancing technology there is ever less needed to destroy ever more," he told me. "Large-scale destruction becomes ever more possible with ever fewer resources. Predictably, the NBIC convergence exacerbates and accelerates the possible exponential increase of this asymmetry."
By NBIC, van Nedervelde is referring to the convergent effects of four critical technology sectors, namely nanotechnology (the manipulation of matter at the molecular scale, including the advent of radically advanced materials, medicines, and robotics), biotechnology, information technology, and cognitive science.
For example, and in an estimation verified by explorative nanorobotics engineer Robert Freitas and others, the resources needed to develop and deploy a nanoweapon of mass destruction could be realized as soon as 2040, give or take 10 years.
"To pull it off, a small, determined team seeking massive destruction would need the following soon-to-be-relatively-modest resources: off-the-shelf nanofacturing equipment capable of creating seed 'replibots'; four mediocre nano-engineering PhDs; four off-the-shelf supercomputers; four — possibly less — months of development time; and four dispersion points optimized for global prevailing wind patterns," explained van Nedervelde.
He described it as 'ADC on steroids': "Compared to technologically mature, future nanotech weapons of mass destruction — nukes are small fry."

Massively Destructive Single Individuals

Van Nedervelde also warned me about SIMADs, short for 'Single Individual, MAssively Destructive'.
"If you think ADC through to its logical conclusions, we have actually less to fear from a terrorist organization, small as it is, as Al-Qaeda or such, than from smart individuals who have developed a deep-seated, bitterly violent grudge against human society or the human species," he says.
The Unabomber case provides a telling example. Now imagine a Unabomber on science-enabled steroids, empowered by NBIC-converged technologies. Such an individual would conceivably have the potential to wreak destruction and cause death at massive scales: think whole cities, regions, continents, possibly even the entire planet.
"SIMAD is one of the risks that I worry about the most," he says. "I have lost sleep over this one."
I asked Barrat if a single individual could actually have what it takes to create a massively destructive AI.
"I don't think a single individual could come up with AI strong enough to go catastrophically rogue," he responded. "The software and hardware challenges of creating Artificial General Intelligence (AGI), the stepping stone to more volatile ASI — artificial superintelligence — is closer in scale and complexity to the Manhattan Project to make an atomic bomb (which cost $26 billion, at today's valuation) than it is to the kind of insights 'lone geniuses' like Tesla, Edison, and Einstein periodically rack up in other fields."
He's also skeptical that a small team could do it.
"With deep pocketed contestants in the race like IBM, Google, the Blue Brain Project, DARPA, and the NSA, I also doubt a small group will achieve AGI, and certainly not first."
The reason, says Barrat, is that all contenders — large, small, stealth, and spook — are fueled by the knowledge that commoditized AGI — or human-level intelligence at computer prices — will be the most lucrative and disruptive technology in the history of the world. Imagine banks of thousands of PhD-quality "brains" cracking cancer research, climate modeling, and weapons development.
"In AGI, ultimate financial enticement meets ultimate existential threat," says Barrat.
Barrat says that in this race, and for investors especially, the impressive real world achievements of corporate giants resonate.
"Small groups with little history, not so much," he says. "IBM's team Watson had just 15 core members, but it also had contributions from nine universities and IBM's backing. Plus IBM's nascent cognitive computing architecture is persuasive — who's seen a PBS NOVA or read even 1,000 words about anyone else's? Small groups have growth potential, but little of this leverage. I expect IBM to take on the Turing Test in the early 2020's, probably with a computer named, yup, Turing. "
Regrettably, this doesn't preclude the possibility that, eventually, a malevolent terrorist group could get its hands on some sophisticated code, make the required tweaks, and then unleash it onto the world's digital infrastructure. It might not be apocalyptic in scope, but it could still be potentially destructive.
There's also the possibility that a crafty team or individual could use more rudimentary instantiations of AI to develop powerful machine intelligence. It's conceivable that an ASI could be developed indirectly by humans, with AI doing the lion's share of the work. Or it could come into being through some other, unknown channel. Personally, I think a small team could unleash a rogue ASI onto the world, though not for a very, very long time.

Protecting Ourselves

Not content to just discuss gloom-and-doom, we also talked about preventative measures. Now, one way we could protect ourselves from these threats is to turn all of society into a totalitarian police state. But no one wants that. So I asked both van Nedervelde and Barrat if there's anything else we could do.
"The good news is that we are not totally defenseless against these threats," said van Nedervelde. "Precautions, prevention, early warning and effective defensive countermeasuresare possible. Most of these are not even 'draconian' ones, but they do require a sustained resolve for prophylaxis."
He envisions the psychological monitoring of people displaying sustained and significantly deviant behavior within education systems and other institutions.
"Basically something like a humanity-wide psychological immune system: on-going screening to spot those SIMAD Unabombers when they are young and hopefully long before they turn to carrying out malicious plans," he told io9. "To that end, there could be mental behavior monitoring within existing security systems and mental health monitoring and improvement within public health systems."
He also thinks that global governance could be improved so that "organizations like the UN and other transnational organizations can be credibly effective at rapidly reacting suitably whenever an existential threat rears its ugly head."
He says we can also anticipate ADC or SIMAD attacks in order to counter them as they are happening. To defend ourselves against weaponized nanotechnology, we could deploy emergency defenses such as utility fog, solar shades, EMP bursts, and targeted radiation.
As for protecting ourselves against a rogue AI, Barrat says the question presumes that small organizations are more unstable and in need of oversight than large ones.
"But look again," he warns. "Right now the NSA with its $50 billion black budget represents a far greater threat to the US constitution than Al-Qaeda and all the AGI wannabes put together. We instinctively know they won't be less wayward with AGI should they achieve it first."
Barrat suggests two one-size-fits-all stopgaps:
"Create a global public-private partnership to ride herd on those with AGI ambitions, something like the International Atomic Energy Agency (IAEA). Until that organization is created, form a consortium with deep pockets to recruit the world's top AGI researchers. Convince them of the dangers of unrestricted AGI development, and help them proceed with utmost caution. Or compensate them for abandoning AGI dreams."

The Surveillance State

More radically, van Nedervelde has come up with the concept of the 4 E's: "Everyone has Eyes and Ears Everywhere," an idea that could become reality via another acronym that he coined: Panoptic Smart Dust Sousveillance (PSDS).
"Today, 'smart dust' refers to tiny MEMS devices nicknamed 'motes' measuring one cubic millimeter or smaller capable of autonomous sensing, computation and communication in wireless ad-hoc mesh networks," he explained. "In the not too far future, NEMS will enable quite literal 'smart dust' motes so small — 50 cubic microns or smaller — that they will be able to float in the air just like 'dumb dust' particles of similar size and create solar-powered mobile sensing 'smart clouds'."
He imagines the lower levels of the Earth's atmosphere filled with smart dust motes at an average density of three motes per cubic yard of air. If engineered, deployed, maintained and operated by the global citizenry for the global citizenry, this would create a 'Panoptic Smart Dust Sousveillance' (PSDS) system — essentially a citizen's sousveillance network effectively giving Everyone Eyes and Ears Everywhere, and thereby effectively and efficiently realizing — or at least enabling in the sense of making possible — so-called 'reciprocal accountability' throughout civilized society.
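It's worth pausing on what "three motes per cubic yard" of the lower atmosphere actually implies. Here's a rough order-of-magnitude estimate; the one-kilometre altitude cutoff is my own assumption, since van Nedervelde doesn't specify how far up "the lower levels" extend.
```python
# Rough order-of-magnitude estimate of how many motes a "3 per cubic yard"
# smart-dust layer would require. The 1 km altitude cap is an assumption.

EARTH_SURFACE_M2 = 5.1e14     # Earth's surface area in square metres
LAYER_HEIGHT_M = 1_000        # assumed thickness of the "lower atmosphere" layer
CUBIC_YARD_IN_M3 = 0.7646     # one cubic yard in cubic metres
MOTES_PER_CUBIC_YARD = 3

layer_volume_m3 = EARTH_SURFACE_M2 * LAYER_HEIGHT_M
cubic_yards = layer_volume_m3 / CUBIC_YARD_IN_M3
total_motes = cubic_yards * MOTES_PER_CUBIC_YARD

print(f"{total_motes:.1e} motes")  # roughly 2e18 motes
```
Even under these generous assumptions, that's on the order of a few quintillion devices, which gives a sense of why he frames this as a further-out, NEMS-era possibility.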
"Assuming that most of the actual sousveillance would not be done by humans but by pattern-spotting machines instead, this would indeed be the end of what I have called 'absolute privacy' — still leaving most with, in my view acceptable, 'relative privacy' — but most probably also the end of SIMAD or other terrorist attacks as well as, for instance, the end of violence and other forms of abuse against children, women, the elderly and other victims of domestic violence and other abuse."
He claims it would likely also bring most forms of corruption and other crimes to a screeching halt. It would create the ultimate form of what David Brin has called the Transparent Society, or what ethical futurist Jamais Cascio has referred to as the Participatory Panopticon.
"We would finally have an answer to Juvenal's question from Roman antiquity "Quis custodiet ipsos custodes?' (Who watches the watchers?)," said van Nedervelde. "And the answer will be: We, the people, the citizenry, ourselves — which would be wholly appropriate, in my view."
Follow me on Twitter: @dvorsky
This article originally appeared at io9.