June 13, 2014

Sign up for my new course: "Superintelligence and the Coming Technological Singularity"


This is an online course that will run for the month of July.

Description:

This class introduces the key concepts and theories as they pertain to the hypothesized Technological Singularity, or Intelligence Explosion. We will survey the history of the idea, the state of artificial intelligence today, and the theoretical underpinnings that give rise to the prospect of greater-than-human machine intelligence. Topics to be discussed will include the brain-as-computer analogy, Accelerating Change, brain mapping initiatives, whole brain emulations versus rules-based AI, the various definitions of the Singularity, and Friendly AI. Along the way we will discuss the benefits and risks posed by machine superintelligence, and the ethical considerations involved.

It's only $59, so sign up here.


May 15, 2014

How Would Humanity Change If We Knew Aliens Existed?


We have yet to discover any signs of an extraterrestrial civilization — a situation that could quite literally change overnight. Should that happen, our sense of ourselves and our place in the cosmos would forever be shaken. It could even change the course of human history. Or would it?
Top image: Josh Kao.
Last week, SETI's Seth Shostak made the claim that we'll detect an alien civilization by 2040. Personally, I don't believe this will happen (for reasons I can elucidate in a future post — but the Fermi Paradox is definitely a factor, as is the problem of receiving coherent radio signals across stellar distances). But it got me wondering: What, if anything, would change in the trajectory of a civilization's development if it had definitive proof that intelligent extraterrestrials (ETIs) were real?

Finding a World Much Like Our Own

As I thought about this, I assumed a scenario with three basic elements.

First, that humanity would make this historic discovery within the next several years. Second, that we wouldn't actually make contact with the other civilization — just the receipt, say, of a radio transmission, something like a Lucy Signal that would cue us to their existence. And third, that the ETI in question would be at roughly the same level of technological development as our own. (That said, if the signal came from an extreme distance — hundreds or thousands of light-years away — these aliens would probably have advanced appreciably by now. Or they could be gone altogether, the victims of a self-inflicted disaster.)
I tossed this question over to my friend and colleague Milan Cirkovic. He's a Senior Research Associate at the Astronomical Observatory of Belgrade and a leading expert on SETI.
"Well, that's a very practical question, isn't it?" he responded. "Because people have been expecting something like this since 1960 when SETI was first launched — they haven't really been expecting to find billion-year old supercivilizations or just some stupid bacteria."
Indeed, the underlying philosophy of SETI over the course of its 50-year history has been that we'll likely detect a civilization roughly equal to our own — for better or worse. And no doubt, in retrospect it started to look "for worse" once hopes of an early success were dashed. Frank Drake and his colleagues thought they would find signs of ETIs fairly quickly, but that turned out not to be the case (though Drake's echo can still be heard in the unwarranted contact optimism of Seth Shostak).

"Enormous Implications"

"Some people argued that a simple signal wouldn't mean much for humanity," added Cirkovic, "but I think Carl Sagan, as usual, had a good response to this."

Specifically, Sagan said that the very understanding that we are not unique in the universe would have enormous implications for all those fields in which anthropocentrism reigns supreme.
"Which means, I guess, half of all the sciences and about 99% of the other, non-scientific discourse," said Cirkovic.
Sagan also believed that the detection of a signal would reignite enthusiasm for space in general, both in terms of research and eventually the colonization of space.
"The latter point was quite prescient, actually, because at the time he said this there wasn't much enthusiasm about it and it was much less visible and obvious than it is today," he added.
No doubt, such a discovery would generate tremendous excitement and enthusiasm for space exploration. In addition to spurring our own expansion into space, there would be added impetus to reach out and meet them.
At the same time, however, some here on Earth might counterargue that we should stay home and hide from potentially dangerous civilizations (ah, but what if everybody did this?). Ironically, some might even argue that we should significantly ramp up our space and military technologies to meet potential alien threats.

Developmental Trajectories

In response to my query about how the detection of ETIs might affect the developmental trajectory of a civilization, Cirkovic replied that both of Sagan's points can be generalized to any civilization at an early stage of its development.
He believes that overcoming speciesist biases, along with sustained interest in and interaction with the cosmic environment, must be desirable for any (even remotely) rational actors anywhere. But Cirkovic says there may be exceptions — like species that emerge from radically different environments, say, the atmospheres of Jovian planets. Such species would likely take little interest in the surrounding space, which would be invisible to them 99% of the time.
So if Sagan is correct, detecting an alien civilization at this point in our history would likely be a good thing. In addition to fostering science and technological development, it would motivate us to explore and colonize space. And who knows, it could even instigate significant cultural and political changes (including the advent of political parties both in support of and in opposition to all this). It might even lead to new religions, or eliminate them altogether.
Another possibility is that nothing would change. Life on Earth would go on as usual, with people working to pay their bills and keep a roof over their heads. There could be a kind of detachment from the whole thing, leading to a certain ambivalence.
At the same time, however, it could lead to hysteria and paranoia. Even worse, and in a twisted irony, the detection of a civilization equal to our own (or any life less advanced than us, for that matter) could be used to bolster the Great Filter hypothesis of the Fermi Paradox. According to Oxford's Nick Bostrom, such a detection would be a strong indication that doom awaits us in the (likely) near future — a filter that affects all civilizations at or near our current technological stage. The reason, says Bostrom, is that in the absence of a Great Filter, the galaxy should be teeming with super-advanced ETIs by now. Which it's clearly not.
Yikes. Stupid Fermi Paradox — always getting in the way of our future plans.
Follow me on Twitter: @dvorsky
This article originally appeared at io9.

10 Futurist Phrases And Terms That Are Complete Bullshit



I recently wrote about 20 terms every self-respecting futurist should know, but now it's time to turn our attention to the opposite. Here are 10 pseudofuturist catchphrases and concepts that need to be eliminated from your vocabulary.
Top image: Screen grab from Elysium.

1. "Transcendence"



Some futurists toss this word around in a way that's not too far removed from its religious roots. The hope is that our technologies can help us experience our existence beyond normal or physical bounds. Now, it very well may be true that we'll eventually learn how to emulate brains in a computer, but it's an open question whether we'll be able to transfer consciousness itself. In other words, the future may not be for us — it'll be for our copies. So it's doubtful any biological being will ever literally experience the process of transcension (just the illusion of it).
What's more, life in a "transcendent" digitized realm, while full of incredible potential, will be no walk in the park; full release, or transcendence, is not likely an achievable goal. Emulated minds, or ems, will be prone to hacking, deletion, unauthorized copying, and subsistence wages. Indeed, a so-called uploaded mind may be free from its corporeal form, but it won't be free from economic and physical realities, including the safety and reliability of the supercomputer running the ems, and the costs involved in procuring sufficient processing power and storage space.

2. "The Singularity"

Vernor Vinge co-opted this term from cosmology as a way to describe a blind spot in our predictive thinking — more specifically, our inability to predict what will happen after the advent of greater-than-human machine intelligence. But since that time, the Technological Singularity has degenerated into a term devoid of any real meaning.
In addition to its quasi-religious connotations, it has become a veritable Rorschach test for futurists. The Singularity has been used to describe accelerating change, or a future time when technological progress occurs almost instantly. It has also been used to describe humanity's transition into a posthuman condition, mind uploads, and the advent of a utopian era. Because of all the baggage this term has accumulated, and because the peril that awaits us is coming into clearer focus (e.g., an Intelligence Explosion), it's a term that needs to be put to bed, replaced by more substantive and unambiguous hypotheses.

3."Technology Will Save the Future"



I wholeheartedly agree that we should use technology to build the kind of future we want for ourselves and our descendants. Absolutely. But it's important for us to acknowledge the challenges we're sure to face in trying to do so and the unintended consequences of our efforts.
Technology is a double-edged sword that's constantly putting us on the defensive. Our inventions often produce outcomes that have to be guarded against. Guns have produced the need for gun control and bulletproof vests. Software has produced the need for antivirus programs and firewalls. Industrialization has resulted in labour unions, climate change, and the demand for geoengineering efforts. Airplanes have been co-opted as terrorist weapons. And on and on and on.


The evolution of our technologies could result in a future in which our planet is wrecked and depleted, our privacy gone, our civil liberties severely curtailed, and our political, social, and economic structures radically altered. So while we should still strive to create the future, we must remember that we're also going to have to adapt to it.

4. "Will"

We often speak about things that will happen in the future as if there's a certain inevitability to them, or as if we're masters of our own destinies. Trouble is, different people have different visions of the future depending on their needs, values, and place of privilege; there will always be tension arising from competing interests. What's more, we will undoubtedly hit some intractable technological and economic barriers along the way, not to mention some black swans (unexpected events) and mules (unexpected events beyond our current understanding of how the world works).
Another perspective comes from Jayar LaFontaine, a Foresight Strategist with Idea Couture. He told me,
The word "will" is wildly overused by futurists. It's small and innocuous, so it can be slipped into speech to create a sense of authority which is almost always inappropriate. More often than not, it indicates a futurist's personal biases on a subject rather than any serious assessment of certainty. And it can shut down fruitful conversations about the future, which for me is the whole point.

5. "Immortality"

Some folks in the radical life extension and transhumanist communities like to talk about achieving "immortality." Indeed, there's a very good chance that future humans will eventually enter into a state of so-called negligible senescence (the cessation of aging) — a remarkable development that will likely come about through the convergence of several tech sectors, including biotechnology, cybernetics, neuroscience, molecular nanotechnology, and others. But it's a prospect that has been taken just a bit too far.


The Fountain of Youth, 1546 painting by Lucas Cranach the Elder.
First, accidental or unavoidable deaths (like getting hit by a streetcar, being murdered, or inadvertently flying a spacecraft into a supernova) will always be a part of the human — or posthuman — condition. Indeed, the longer we live, the greater our chance of getting killed in one way or another. Second, the universe is a finite thing — which means our existence is finite, too. That could mean an ultimate fate decided by the heat death of the universe, the Big Crunch, or the Big Rip. And contrary to the thinking of Frank Tipler, there's no loophole — not even a life-resurrecting Omega Point.

6. "Disruptive"



Virtually every gadget that comes out of Silicon Valley these days is heralded as being disruptive. I don't think this word means what these companies think it means.
Honestly, for a technology to be truly disruptive it has to shake the foundations of society. Looking back through history, it's safe to say that the telegraph, trains, automobiles, and the Internet were truly disruptive. Looking ahead, the genuinely disruptive developments will be molecular assembly, the social and economic consequences of mass automation, and the proliferation of AI and AGI.

7. "Future Shock"

This is a term that's getting old fast. 


Sure, such a thing may have existed in the early 1970s when Alvin Toffler first came up with the idea (though I doubt it), but does anyone truly suffer from "future shock"? Toffler described it as "shattering stress and disorientation" caused by "too much change in too short a period of time," but I don't recall seeing it in the DSM-5.
No doubt, many folks in our society rail against change — like resistance to gay marriage or universal healthcare — but it would be inaccurate and unfair to refer to them as being in a state of shock. Reactionary, maybe.

8. "Moore's Law"

Nope, not a law. At best it's a consistent empirical regularity — and a fairly obvious one at that. Yes, transistor counts keep climbing, and processing power along with them. But why fetishize that by calling it a law? There are other similar observable regularities, including steady advancements in software, telecommunications, materials miniaturization, and even biotechnology. And in fact, mathematical "laws" can predict industrial growth and productivity in many sectors. What's more, Moore's Law is a poor barometer of progress (something it's often used for), particularly social and economic progress.
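To put some numbers on why "regularity" is the better word: the whole observation reduces to a single doubling curve. Here's a toy Python sketch (my own illustration, nothing more) that projects transistor counts from the 1971 Intel 4004 — roughly 2,300 transistors — assuming a doubling every two years:

    # Toy illustration: Moore's "law" is just exponential doubling.
    # Baseline: Intel 4004 (1971), ~2,300 transistors; doubling time ~2 years.
    def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
        """Project a transistor count via simple exponential doubling."""
        return base_count * 2 ** ((year - base_year) / doubling_years)

    for year in (1971, 1981, 1991, 2001, 2011):
        print(year, f"{transistors(year):,.0f}")

Any steadily improving technology can be fitted with a curve like this, which is exactly why calling the fit a "law" adds nothing.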

9. "The Robot Apocalypse"

Let's assume for a moment that an artificial superintelligence eventually emerges and it decides to destroy all humans (a huge stretch given that it's more likely to do this by accident or because it's indifferent). Because AI is often conflated with robotics, many people say the ensuing onslaught is likely to arrive in the form of marauding machines — the so-called robopocalypse.
Okay, sure, that's certainly one way a maniacal ASI could do it, but it's hardly the most efficient. A more likely scenario would involve the destruction of the atmosphere or terrestrial surface with some kind of nanophage. Or it could infect the entire population with a deadly virus. Alternatively, it could poison all water and the food supply. Or do something unforeseen — it doesn't matter. The point is that it wouldn't need to go to such clunky lengths to destroy us should it choose to do so.

10. "The End Of Humanity"



This one really bugs me. It's both misanthropic and an inaccurate depiction of the future. Some people have gotten it into their heads that the advent of the next era of human evolution necessarily implies the end of humanity. This is unlikely. Not only will biological, unmodified humans exist in the far future, they will always reserve the right to stay that way. So-called transhumans and posthumans are likely to exist (whether they be genetically modified, cybernetic, or digital), but they'll always inhabit a world occupied by regular, plain old Homo sapiens.
(image: bikeriderlondon/Shutterstock)
Follow me on Twitter: @dvorsky
This article originally appeared at io9.

This Could Be the First Animal to Live Entirely Inside a Computer



Animals are exceptionally complicated things. So complicated, in fact, that we've never actually built one ourselves. But the day is fast approaching when we'll be able to create digital versions of organisms on a computer — from the way they move right through to their behaviors. Here's how we'll do it.
I spoke to neuroscientist Stephen Larson, a co-founder and project coordinator for the OpenWorm project. His team is busy at work trying to create a digital version of an actual nematode worm in a computer. 
But before we get to our conversation, let's do a quick review. 

The Path To Virtual Organisms

To be fair, scientists have already created a computational model of an actual organism, namely the exceptionally small bacterium known as Mycoplasma genitalium. It's an amazing accomplishment, but the pathogen — with its 525 genes — is one of the world's simplest organisms. Contrast that with E. coli, which has 4,288 genes, and humans, who have roughly 20,000 protein-coding genes.
Scientists have also created synthetic DNA that can self-replicate, and an artificial chromosome from scratch. Breakthroughs like these suggest it won't be much longer before we start creating synthetic animals for the real world. Such endeavors could result in designer organisms that help manufacture vaccines, medicines, and sustainable fuels, and that assist with toxic cleanups.
There's a very good chance that many of these organisms — and many drugs — will be designed and tested in computers first. Eventually, our machines will be powerful enough, and our understanding of biology deep enough, to allow us to start simulating some of the most complex biological functions — from entire microbes right through to the human mind itself (what will be known as whole brain emulations).
Needless to say, we're not going to get there in one day. We'll have to start small and work our way up. Which is why Larson and his team have started with a simulated nematode worm.

Analog and Digital Worlds Converge

To kick off our conversation, I asked Larson to clarify what he means by "simulation." How is it, exactly, that biological attributes can be translated to the digital realm?
"At the end of the day, biology must obey the laws of physics," he responded. "Our project is to simulate as much of the important physics — or biophysics — of the C. elegans as we can, and then compare against measurements from real worms. When we say simulation, we are specifically referring to writing computer programs that use equations from physics that are applied to what we know about the worm."
This, he says, is what's allowing them to predict what its cells are doing and how they add up to the overall physiology and behavior of a worm.
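To make that concrete, here's a minimal sketch of what "equations from physics applied to the worm" can look like in code. It's my own toy example, not OpenWorm code — a single body segment modeled as a damped spring driven by a muscle force, stepped forward with Euler integration:

    # One body segment as a damped spring driven by a muscle force,
    # advanced with explicit Euler integration (toy example only).
    def step(x, v, muscle_force, k=1.0, c=0.5, m=1.0, dt=0.01):
        """Advance the segment's position x and velocity v by one time step."""
        a = (muscle_force - k * x - c * v) / m  # Newton's second law: F = ma
        return x + v * dt, v + a * dt

    x, v = 0.0, 0.0
    for _ in range(1000):
        x, v = step(x, v, muscle_force=0.2)
    print(round(x, 3))  # settles near where muscle and spring forces balance

The real models stack many interacting equations like this one — and then, as Larson says, check the resulting motion against measurements from real worms.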
But why C. elegans?
"This tiny worm is by far the most understood and studied animal with a brain in all of biology," he says. "All of the ~1,000 cells of this organism have been mapped, including a tiny brain composed of 302 neurons and their network composed by give-or-take 5,500 connections."
Additionally, Larson says that three different Nobel prizes have been awarded for work on this worm, and it is increasingly being used as a model to gain an enhanced understanding of disease and health relevant to all organisms, including humans.
"When making a complex computer model, it is important to start where the data are the most complete," he says.

Simulation Versus Emulation

We also talked about the various attributes of the worm they're trying to digitize. Given that they're also trying to simulate its brain, I wondered if their project is aimed more at emulation than simulation.
"We are currently addressing the challenge of closing the 'brain-behavior loop' in C. elegans," he says. "In other words, through this simulation we want to understand how its proto-brain controls its muscles to move its body around an environment, and then how the environment is interpreted by the proto-brain. That means leaving aside reproduction or digestion or other internal functions for now until that first part is complete. Once we get there, we will move on to these other aspects.
As for the emulation versus simulation distinction, Larson says that, when it comes to brains, he's seen the two terms used interchangeably: "I'm not sure there is a meaningful difference."
On this point I actually disagree. A simulation seeks to recreate an approximation or appearance of something, whereas an emulation seeks to recreate de facto functionality. So if the OpenWorm project is successful, and the brain of a nematode worm is perfectly recreated in the digital realm, we'd be talking about an emulation, not a simulation. This is an important distinction from an ethical perspective, because an emulation carries the potential for harm — and, consequently, a claim to moral consideration.

An Incomplete Map

Larson, who has a bachelor of science and a master of engineering in computer science from MIT, along with a Ph.D. in neuroscience from the University of California, San Diego, also told me about some of the challenges they're facing.
"Despite being the best understood animal, there are still aspects of this worm on the frontier of our understanding of biology as a whole that biologists in this field do not have complete data for, and this obviously limits us," he told io9.
For example, he described to me how neuroscientists have made progress by poking a sharp glass electrode into a neuron from a mouse or rat to analyze neuronal electrical behavior.
"However, this is much more difficult to do in worms so it hasn't been done as much, and as a consequence there is not as much data present," he says. "However, recently scientists are using breakthroughs in optical imaging of neuronal behavior and laser control of neurons to catch up to the last 50 years of understanding neurons in rodents."
Larson says there's an explosion of data on its way and they're doing their best to collect as much insight from this work so that they can build these neural behaviors into their model.
"We can also use some clever tricks from computer science to help us fill in some of the gaps," he adds. "The good news is that this will only get easier as the tools and techniques get better over time."
Speaking of tools, the OpenWorm team is utilizing modern programming languages like Java, Python, and C++, along with related technologies. They're also using a lot of cutting-edge open source libraries across all these languages. And for organizing themselves online, they've been using GitHub, Google Drive, and Google+ Hangouts.

Progress to Date

The first major goal of OpenWorm is to connect a simulation that deals with the body, muscles, and environment of a worm to a simulation that deals with the neurons and neuronal activity of the worm.
"We've spent the last three years making these two elements as accurate as possible," he told me. "Late last year we were pleased that we got the part dealing with the body, muscles, and environment to do a simple 'wiggle.' Even this extremely simple behavior was exciting because it showed proof of concept — we could create a sophisticated simulated C. elegans body that we could eventually do sophisticated brain-behavior in silico experiments with.  
Looking ahead, the team is working on a few different areas based on the interests of their contributors.
"We are refining a published data set of real worm behaviors into a form where we can automatically compare our model to real data," he says. "We are connecting the body model to the nervous system model."
They're also working to make all of it accessible via the web.

Open and Crowdfunded

One of the more exciting aspects of this project is the open source nature of it all. Larson says that every line of code produced by the project is shared on GitHub as it is written, meaning that anyone in the world can watch as they assemble the simulation.
"Our roadmap is open too, so anyone can see where we are going and participate," Larson told io9.  "We also hold scientific discussions online that you can see on our YouTube channel.  Essentially we try to invert the weakness of not being able to meet in person very often into a strength of transparency in our communications over the internet."
The OpenWorm team is also launching a Kickstarter campaign on April 19.
"We're raising money to enable us to put the simulation — which we're calling a WormSim for simplicity — up online and accessible through your web browser," he says. "This will make the experience of seeing the results of OpenWorm much more tangible for folks, because they'll be able to explore the activity of the model in a 3D virtual world for themselves. Today we have more static representations of the worm that we have already put online, and these are already being used by scientists around the world."

A Template For the Future

Encouragingly, the OpenWorm approach to simulating an organism could easily translate to similar projects. Indeed, they've already received inquiries about possible collaborations from groups doing related projects on fruit flies and ants.
"We hope that what we do here in C. elegans will create a template that can be used for other organisms, but for the moment we're sticking with showing that this way works first.
Images: openworm.org
Follow me on Twitter: @dvorsky
This article originally appeared at io9.