June 29, 2011

Hear that? It's the Singularity coming.

The idea of a pending technological Singularity is under attack again, with a number of futurists arguing against the possibility, most prominently Charlie Stross in his astonishingly unconvincing article, "Three arguments against the singularity." While it's not my intention to write a comprehensive rebuttal at this time, I would like to bring something to everyone's attention: the early rumblings of the coming Singularity are becoming increasingly evident.

Make no mistake. It's coming.

As I've discussed on this blog before, there are nearly as many definitions of the Singularity as there are individuals who are willing to talk about it. The whole concept is very much a sounding board for our various hopes and fears about radical technologies and where they may take our species and our civilization. It's important to note, however, that at best the Singularity describes a social event horizon beyond which it becomes difficult, if not impossible, to predict the impact of recursively self-improving, greater-than-human artificial intelligence.

So, it's more of a question than an answer. And in my own attempt to address this quandary, I have personally gravitated toward the I.J. Good camp, in which the Singularity is characterized as an intelligence explosion. In 1965 Good wrote,
    Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
This perspective and phrasing sits well with me, mostly because I already see signs of this pending intelligence explosion happening all around us. It's becoming glaringly obvious that humanity is offloading all of its capacities, albeit in a distributed way, to its technological artifacts. Eventually, these artifacts will supersede our capacities in every way imaginable, including the acquisition of new ones altogether.

A common misconception about the Singularity and the idea of greater-than-human AI is that it will involve a conscious, self-reflective, and even morally accountable agent. This has led some people to believe that it will have deep and profound thoughts, quote Sartre, and consequently act in a quasi-human manner. This will not be the case. We are not talking about artificial consciousness or even human-like cognition. Rather, we are talking about super-expert systems that are capable of executing tasks that exceed human capacities. It will stem from a multiplicity of systems that are individually singular in purpose, or at the very least, very limited in terms of functional scope. And in virtually all cases, these systems won't reflect on the consequences of their actions unless they are programmed to do so.

But just because they're highly specialized doesn’t mean they won’t be insanely powerful. These systems will have access to a myriad of resources around them, including the internet, factories, replicators, socially engineered humans, robots that they can control remotely, and much more; this technological outreach will serve as their arms and legs.

Consequently, the great fear of the Singularity stems from the realization that these machine intelligences, which will have processing capacities orders of magnitude beyond those of humans, will be able to achieve their pre-programmed goals without difficulty, even if we try to intervene and stop them. This is what has led to the fear of poorly programmed or "malevolent" SAI. If our instructions to these super-expert systems are poorly articulated or under-developed, these machines could pull the old 'earth-into-paperclips' routine.
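To make that worry concrete, here is a minimal sketch, with entirely made-up names and numbers, of how an under-specified objective plays out. It illustrates the failure mode only, not how any real system is or would be built:

```python
# A toy illustration, not anyone's actual design: a perfectly obedient
# optimizer produces an outcome nobody wanted because its objective is
# under-specified. All names and numbers here are hypothetical.

def naive_objective(state):
    # We asked only for paperclips; nothing else appears in the score.
    return state["paperclips"]

def careful_objective(state):
    # A better-articulated goal also values the things we forgot to mention.
    return state["paperclips"] + 1000 * state["habitable_matter"]

def optimize(objective, steps=10):
    state = {"paperclips": 0, "habitable_matter": 10}
    for _ in range(steps):
        # The machine greedily takes whichever action scores higher.
        convert = dict(state)
        if convert["habitable_matter"] > 0:
            convert["habitable_matter"] -= 1
            convert["paperclips"] += 5
        idle = dict(state)
        state = max([convert, idle], key=objective)
    return state

print(optimize(naive_objective))    # {'paperclips': 50, 'habitable_matter': 0}
print(optimize(careful_objective))  # {'paperclips': 0, 'habitable_matter': 10}
```

Nothing "malevolent" happens here: the optimizer is perfectly obedient in both cases, and the second objective differs from the first only in spelling out what we actually value.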

For those skeptics who don't see this coming, I implore them to look around. We are beginning to see the opening salvo of the intelligence explosion. We are already creating systems that exceed our capacities, and it's a trend that is quickly accelerating. This is a process that started decades ago with the advent of computers and other calculating machines, but it's only recently that we've witnessed more profound innovations. Humanity chuckled in collective nervousness back in 1997 when chess grandmaster Garry Kasparov was defeated by Deep Blue. From that moment on the writing was on the wall, but we've since chosen to deny the implications; call Deep Blue's victory proof-of-concept, if you will, that a Singularity is coming.

More recently, we have developed a machine that can defeat the finest Jeopardy players, and now there's an AI/robotic system that can play billiards at a high level. You see where this is going, right? We are systematically creating individual systems that will eventually and collectively exceed all human capacities. This can only be described as an intelligence explosion. While we are a long way off from creating a unified system that can defeat us well-rounded and highly multidisciplinary humans across all fields, it's not unrealistic to suggest that such a day is coming.

But that's beside the point. What's of concern here is the advent of the super-expert system that works beyond human comprehension and control: the one that takes things a bit too far, with catastrophic results.

Or with good results.

Or with something that we can't even begin to imagine.

We don't know, but we can be pretty darned sure it'll be disruptive, if not paradigmatic in scope. This is why it's called the Singularity. The skeptics and the critics can clench their fists and stamp their feet all they want, but that's where we find ourselves.

We humans are already lagging behind many of our systems in terms of comprehension, especially in mathematics. Our artifacts will increasingly do things for reasons that we can’t really understand. We’ll just have to stand back and watch, incredulous as to the how and why. And accompanying this will come the (likely) involuntary relinquishment of control.

So, we can nit-pick all we want about definitions, fantasize about creating a god from the machine, or poke fun at the rapture of the nerds.

Or we can start to take this potential more seriously and have a mature and fully engaged discussion on the matter.

So what’s it going to be?

8 comments:

  1. Anonymous, 2:55 AM

    Hi George,

    It seems like you're envisioning a singularity with distinct machine and human components, wherein the people quickly get surpassed by the technology.

    Is that right, or do you think it's likely that humans themselves will integrate with the technology (via implants or nanobots or something external) such that they remain, if not equal to the technology, at least nearby peers? Wouldn't even super-intelligent programs be able to come up with creative solutions to the problems of integrating man and machine?

    Best,

    John

  2. ZarPaulus

    I can give you three of my own reasons why there won't be a Singularity:

    1. Neurons are not transistors: The impulses traveling down an axon and across a synapse may be discrete, but the inputs coming in through the dendrites and summed in the cell body vary immensely in strength. The brain is more like a hybrid system with analog inputs and digital outputs, and even then that's a very poor analogy (see the toy sketch at the end of this comment).

    2. Processing power is just speed: Assuming we do succeed in making a human-equivalent AI, and that Moore's law holds out long enough for us to give it considerably more processing power than a human brain, that won't automatically make it any smarter than a human. It'll just think a lot faster and be much better at multi-tasking. In fact, it would probably go insane trying to communicate with us "meat-glaciers".

    3. The brain is still a mystery: We've barely scratched the surface of the thing that gives us consciousness, and most attempts at messing with it have given us poor results.

    And I don't want to hear a word about "seed AI"; evolution is too much of a crapshoot, not to mention too inefficient for most investors.
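    To illustrate the "analog in, digital out" picture from point 1, here's a minimal leaky integrate-and-fire sketch; the parameters are illustrative, not biological:

    ```python
    # Toy neuron: graded (analog) input currents are integrated
    # continuously, but the output is an all-or-nothing spike.
    def simulate(inputs, dt=1.0, tau=20.0, threshold=1.0):
        """'inputs' is an analog current per time step; returns spike times."""
        v, spikes = 0.0, []
        for t, current in enumerate(inputs):
            v += dt * (-v / tau + current)  # leaky, continuous integration
            if v >= threshold:              # discrete, digital output
                spikes.append(t)
                v = 0.0                     # reset after the spike
        return spikes

    # A weak analog drive never fires; a stronger one yields a spike train.
    print(simulate([0.02] * 100))  # []
    print(simulate([0.08] * 100))  # roughly one spike every 20 steps
    ```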

  3. Anonymous, 3:47 PM

    This is what I hear: *crickets*

    To extrapolate from chess and Jeopardy victories to some grandiose transcendence of humanity via computers is an insane fallacy. These machines are idiot-savants; they have no consciousness, no creativity, no imagination of any kind.

    The proper use of computers is to construct a Matrix where we can escape from this hostile universe into the Multiverse of our minds. The kind of technology you're talking about leads directly to AI arms races, robot wars and cybernetic apocalypse. Wake up, you fool!

  4. Excellent article, George!

  5. Great article.
    There is an irresistible process driving us toward our future. Like it or not, we are heading into the radically unknown in the very near term.

    It is very important that we be as aware as possible of what is happening and what may happen, so that we are able to deal with it and (hopefully) benefit.

    I don't like the term Singularity personally; it tends to foster an image of nerd culture or religion. It is not an event; it is merely a point in time in our near future beyond which it is impossible to predict anything for humanity. I would highly recommend that anyone do even the briefest research into this subject. It is important.

  6. I believe there will be a Singularity, or rather that ultimately we will develop AI represented in software.

    The timescales that are floating around for this occurring seem wildly optimistic to me, though. I would love to be wrong about this.

    I think we are not really very good at programming yet, have very little grasp of intelligence, and even less knowledge about how we create new knowledge.

    So I ultimately agree we will get there, but would probably add another 50, 100, or 200 years to Kurzweil's projections.

  7. I have a good friend in a related field who works a lot with genetic algorithms, and we've spent a fair amount of time discussing their implementation. I've tended to view it through the lens of FPGA design synthesis, which also generates results too complex for us to understand in a direct way. Both of our experiences have led us to believe that these things are exceedingly difficult to integrate into a fully functional system, and essentially never work by accident. Yet that's how I tend to see George's description of expert systems taking on capabilities. I've never seen an expert-type system that doesn't start to break down very rapidly outside carefully-defined scenarios, and there are good theoretical reasons to believe that this is not an accident. It would be like saltational speciation in evolution: an evocative idea that keeps coming up, but there are good reasons to believe it has never happened in all of evolutionary history. (A toy genetic-algorithm sketch follows at the end of this comment.)

    Thus I think that the real "danger" is of humans intentionally integrating expert systems into a proven multi-modal attention and goal management system: the human brain. That's not to say that this would involve veridical neurons; I'm really just claiming that the only near-term-accessible plan for integrating all sorts of new perceptual/inferential capabilities is something at least roughly modeled on the human brain.

    I disagree with ZarPaulus about how much progress we've made toward understanding the brain, and I think humanlike AI is highly achievable in the next 2-3 decades. I also think that such 'human' AI entities could, if developed for military or similar purposes, 'leave out' some of the troublesome parts of brain design that make us recognizably human, producing useful sociopaths and the like. Or they *don't* leave out those parts, but condition those AIs not to see themselves as one of us but as Others. That's my fear.

    But the truly alien intelligences? I just don't see them working well in complex environments without direction from humans to close the design feedback loop, and even then the level of integration would be limited by our own capacity for managing complexity. Maybe a subsequent generation of augmented humans might get into more trouble on this point, but of course I can't speculate much into the other side of the Singularity.
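    As promised, here's the toy genetic-algorithm sketch (my own illustration, nothing like production code). It shows the opacity I mean: you get a result, not a design rationale.

    ```python
    import random

    # Toy GA: evolve 20-bit genomes toward a hypothetical all-ones "spec".
    # The winning genome's mutation history is discarded along the way,
    # so the process yields an artifact but no explanation of its design.
    TARGET = [1] * 20

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.05):
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break
        # Keep the fittest half; refill with mutated copies of survivors.
        survivors = population[:25]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(25)]

    print(generation, fitness(population[0]))
    ```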

  8. I'm studying AI at the moment from the perspectives of natural and artificial intelligence. One of the concepts I've been particularly taken with is that of emergent behaviour. In a system that utilises emergent behaviour, it's extremely difficult, if not impossible, to understand the minutiae of what is going on at any given time; it's unpredictable to a large degree. But interestingly, what becomes clear is that it is the complex emergent behaviour that is important, and not the underlying detail. An example is the use of ant colony optimisation in communications network routing (a toy sketch follows at the end of this comment).

    The point I'm making is that a brain-like system could potentially be constructed with the intention of generating consciousness as an emergent behaviour, without our needing to obsess over the possibly incalculable processes taking place. To achieve this we would need a better understanding of those basic processes of the cortex and a shed load more computing power, both of which are obtainable in the near term.
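    Here's the toy sketch promised above: ant colony optimisation on a made-up four-node network. No individual ant understands the topology, yet the pheromone trails converge on the cheapest route; real routing implementations are of course far more involved.

    ```python
    import random

    # Hypothetical network: node -> {neighbour: link cost}.
    GRAPH = {
        "A": {"B": 1, "C": 4},
        "B": {"C": 1, "D": 5},
        "C": {"D": 1},
        "D": {},
    }
    pheromone = {(u, v): 1.0 for u in GRAPH for v in GRAPH[u]}

    def walk(src, dst):
        """One ant walks src -> dst, choosing hops by pheromone weight."""
        path, node = [src], src
        while node != dst:
            hops = list(GRAPH[node])
            if not hops:
                return None  # dead end
            weights = [pheromone[(node, h)] for h in hops]
            node = random.choices(hops, weights=weights)[0]
            path.append(node)
        return path

    def cost(path):
        return sum(GRAPH[u][v] for u, v in zip(path, path[1:]))

    for _ in range(200):                 # generations of ants
        path = walk("A", "D")
        if path is None:
            continue
        deposit = 1.0 / cost(path)       # cheaper paths reinforce more
        for edge in zip(path, path[1:]):
            pheromone[edge] += deposit
        for edge in pheromone:           # evaporation lets old trails fade
            pheromone[edge] *= 0.99

    print(max(pheromone, key=pheromone.get))  # strongest link, e.g. ('C', 'D')
    ```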

