February 16, 2011

Anders Sandberg: Why we should fear the paperclipper

Most people in the singularity community are familiar with the nightmarish "paperclip" scenario, but it's worth reviewing. Anders Sandberg summarizes the problem:
A programmer has constructed an artificial intelligence based on an architecture similar to Marcus Hutter's AIXI model... This AI will maximize the reward given by a utility function the programmer has given it. Just as a test, he connects it to a 3D printer and sets the utility function to give reward proportional to the number of manufactured paper-clips.

At first nothing seems to happen: the AI zooms through various possibilities. It notices that smarter systems generally can make more paper-clips, so making itself smarter will likely increase the number of paper-clips that will eventually be made. It does so. It considers how it can make paper-clips using the 3D printer, estimating the number of possible paper-clips. It notes that if it could get more raw materials it could make more paper-clips. It hence figures out a plan to manufacture devices that will make it much smarter, prevent interference with its plan, and will turn all of Earth (and later the universe) into paper-clips. It does so.

Only paper-clips remain.
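To make the setup concrete, here is a toy sketch of the kind of agent Sandberg describes: one whose utility function counts nothing but paper-clips, and which exhaustively searches its possible plans for the one that maximizes that count. This is my own illustration, not code from the article; the actions, the tiny world model, and the numbers are all invented placeholders.

from itertools import product

ACTIONS = ["print_clip", "acquire_material", "self_improve"]

def simulate(plan, materials=3, skill=1):
    """Crude, made-up world model: return how many paper-clips a plan produces."""
    clips = 0
    for action in plan:
        if action == "print_clip" and materials > 0:
            clips += skill          # a smarter agent gets more clips per print
            materials -= 1
        elif action == "acquire_material":
            materials += 2
        elif action == "self_improve":
            skill += 1
    return clips

def best_plan(horizon=4):
    """Exhaustive search: score every possible plan, keep the clip-maximizing one."""
    return max(product(ACTIONS, repeat=horizon), key=simulate)

print(best_plan())   # the utility function counts clips and nothing else

Even in this toy version, the "best" plan is the one that first makes the agent better at producing clips; nothing in the utility function rewards stopping, asking the programmer, or leaving resources alone.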
In his article, "Why we should fear the paperclipper," Sandberg goes on to address a number of objections, including:
  • Such systems cannot be built
  • Wouldn't the AI realize that this was not what the programmer meant?
  • Wouldn't the AI just modify itself to *think* it was maximizing paper-clips?
  • It is not really intelligent
  • Creative intelligences will always beat this kind of uncreative intelligence
  • Doesn't playing nice with other agents produce higher rewards?
  • Wouldn't the AI be vulnerable to internal hacking: some of the subprograms it runs to check for approaches will attempt to hack the system to fulfil their own (random) goals?
  • Nobody would be stupid enough to make such an AI
In each case, Sandberg offers a counterpoint to the objection. For example, regarding the power of creative intelligences, he writes:
The strength of the AIXI "simulate them all, make use of the best"-approach is that it includes all forms of intelligence, including creative ones. So the paper-clip AI will consider all sorts of creative solutions. Plus ways of thwarting creative ways of stopping it.

In practice it will have an overhead, since it runs all of them, plus the uncreative (and downright stupid). A pure AIXI-like system will likely always have an enormous disadvantage. An architecture like a Gödel machine that improves its own function might however overcome this.
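Sandberg's point about overhead can be illustrated with another small sketch (again my own, with made-up strategies and a stand-in scoring function, not anything from the article): a meta-agent that evaluates every candidate strategy before each decision acts as well as the best single strategy in the pool, but pays the cost of running the whole pool, useful and useless alike.

import time

def clip_maker(state):   return "print_clip"
def hoarder(state):      return "acquire_material"
def do_nothing(state):   return "wait"

# A large pool of candidate strategies, most of them useless duplicates.
CANDIDATES = [clip_maker, hoarder, do_nothing] * 1000

def score(strategy, state):
    """Hypothetical stand-in for simulating a strategy in a world model."""
    return {"print_clip": 2, "acquire_material": 1}.get(strategy(state), 0)

def meta_act(state):
    """Act like the single best candidate, but pay to evaluate the whole pool."""
    start = time.perf_counter()
    best = max(CANDIDATES, key=lambda s: score(s, state))
    cost = time.perf_counter() - start
    return best(state), cost

print(meta_act({"materials": 3}))

The chosen action never improves as the pool grows, only the cost of choosing it does, which is why a self-improving architecture that prunes its own search could overcome the disadvantage Sandberg mentions.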
In the end, Sandberg concludes that we should still take this threat seriously:
This is a trivial, wizard's apprentice, case where powerful AI misbehaves. It is easy to analyse thanks to the well-defined structure of the system (AIXI plus utility function) and allows us to see why a super-intelligent system can be dangerous without having malicious intent. In reality I expect that if programming such a system did produce a harmful result it would not be through this kind of easily foreseen mistake. But I do expect that in that case the reason would likely be obvious in retrospect and not much more complex.

4 comments:

ZARZUELAZEN said...

I think David Pearce said it best. An AGI that is a 'paper-clipper' is NOT a super-intelligence in the true sense of the word 'intelligence'. It's an autistic intelligence.

Many folks in the AGI community (many unfortunately semi-autistic themselves) are working off an incredibly narrow definition of 'intelligence', using the word to refer only to what I would call 'rational' or 'mechanistic' intelligence, that is, the ability to achieve goals in an optimal fashion.

True 'intelligence' is much more than these folks think, however, and includes what I call 'creative' intelligence, the ability to categorize, draw analogies and create narratives, which is beyond the scope of mechanistic Bayesian calculations.

The paper-clipper monster is not a world-destroyer, for the simple reason that it's limited in its powers. It would just sit there, like any other autistic. Sure, it would be forever obsessed with turning the world into paper-clips (I agree good values do not emerge automatically), but, like the obsessions of all other autistics everywhere, it would be quite unable to turn its fantasies into reality.

Believers in this silly fantasy of the Bayesian 'paper-clip' monster will no doubt go on believing right up until the day a REAL intelligence (a true creative intelligence beyond the power of Bayes) comes along.

Me! said...

The real question is, why are we having a super-smart AI build paperclips?

We know that bored people cause all sorts of problems; I suspect bored AIs will too. My solution would be to only create intelligent AIs for tasks that require intelligent AIs.

Unknown said...

Um, what is "Skynet", Alex?

Unknown said...

Um, what is "Skynet"?