February 15, 2011

When computers exceed our ability to understand how the hell they do the things they do

Which would be pretty much now.

Great quote from David Ferrucci, the Lead Researcher of IBM's Watson Project:
"Watson absolutely surprises me. People say: 'Why did it get that one wrong?' I don't know. 'Why did it get that one right?' I don't know."
Essentially, the IBM team came up with a whole whack of fancy algorithms and shoved them into Watson. But they couldn't fully predict how these algorithms would work in concert with one another, or what emergent effects (i.e., computational cognitive complexity) would result. The upshot is the seemingly intangible, and not always coherent, way in which Watson gets questions right, and the ways in which it gets questions wrong.
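To make the emergent-effects point concrete, here's a minimal sketch of how an ensemble question-answerer combines evidence. This is not IBM's DeepQA code; the scorer names, weights, and scores are all hypothetical. The point is that the winning answer falls out of a learned blend of many independent signals, so no single rule ever "decides," and even the builders can struggle to say why a given answer won or lost.

```python
# Minimal sketch of ensemble answer scoring (hypothetical, not DeepQA).
# Many independent scorers grade each candidate answer; a learned
# weighting blends them; the top combined score wins.

from dataclasses import dataclass

@dataclass
class Candidate:
    answer: str
    features: dict  # scorer name -> evidence score in [0, 1]

def combined_confidence(candidate: Candidate, weights: dict) -> float:
    """Blend all scorer outputs into a single confidence value."""
    return sum(weights[name] * score
               for name, score in candidate.features.items())

# Hypothetical scorer outputs for two candidate answers to one clue.
candidates = [
    Candidate("Toronto", {"passage_match": 0.9, "type_check": 0.2, "geo_prior": 0.4}),
    Candidate("Chicago", {"passage_match": 0.7, "type_check": 0.8, "geo_prior": 0.6}),
]

# In a real system these weights are learned from training data rather
# than set by hand, which is exactly why attribution is so hard.
weights = {"passage_match": 0.5, "type_check": 0.3, "geo_prior": 0.2}

best = max(candidates, key=lambda c: combined_confidence(c, weights))
print(best.answer, round(combined_confidence(best, weights), 2))  # Chicago 0.71
```

With hundreds of such scorers instead of three, asking "why did it pick that one?" has no crisp answer: responsibility is smeared across the whole ensemble.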

As Watson has revealed, when it errs, it errs really badly.

This kind of freaks me out a little. When we ask computers questions we don't know the answers to ourselves, we aren't going to know beyond a shadow of a doubt whether a system like Watson is right or wrong. Because we don't know the answer, and because we don't necessarily know how the computer got the answer, we are going to have to take a tremendous leap of faith that it got it right whenever the answer seems even remotely plausible.

Looking even further ahead, it's becoming painfully obvious that any complex system that is even remotely superior (or simply different) relative to human cognition will be largely unpredictable. This doesn't bode well for our attempts to engineer safe, comprehensible and controllable super artificial intelligence.

8 comments:

Unknown said...

And what happens when Watson starts asking his(?)/her(?)/its(?) own questions? Or withholding information for reasons unknown to us? I'm sure the technology isn't that advanced yet, but it's something to consider.

John said...

Well, explaining how he arrived at the answer isn't part of the 'requirement spec' for Watson, but I would imagine this could be added in. So in addition to the answer, he (it?) could walk through the methods he used to arrive at it: I found these references, I assumed this meaning for an ambiguous phrase, made the following connections, etc.
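Something like John's suggestion might look like the sketch below: the system returns its answer bundled with the references it found, the assumptions it made, and the connections it drew. This is purely illustrative; the structure, field names, and contents are invented, not an actual Watson feature.

```python
# Hypothetical sketch of an answer-plus-explanation trace.
# All field names and example contents are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class AnswerTrace:
    answer: str
    confidence: float
    references: list = field(default_factory=list)   # sources consulted
    assumptions: list = field(default_factory=list)  # disambiguation choices
    connections: list = field(default_factory=list)  # inference steps taken

trace = AnswerTrace(
    answer="Chicago",
    confidence=0.71,
    references=["encyclopedia entry on O'Hare International Airport"],
    assumptions=["took 'its largest airport' to refer to a U.S. city"],
    connections=["O'Hare is named for a WWII hero", "O'Hare serves Chicago"],
)

print(trace.answer, trace.confidence)
for step in trace.references + trace.assumptions + trace.connections:
    print(" -", step)
```

Even then, the trace can only report the steps the system is able to surface; if the answer emerges from hundreds of interacting scorers, the walkthrough may be a simplification rather than the whole story.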

Unknown said...

As with all machines, the answer is: "when it errs it errs really badly."
That means it doesn't have any common sense... OTOH, many people don't have it either.

Keith_Indy said...

I'd be interested to see if two copies of Watson would give the same answer to a question, and whether they would arrive at the answer the same way...

Martin Andersen said...

Sure, Watson gets the wrong answer sometimes, but it's much better than humans. So if you had such a machine, you could be pretty certain that you couldn't get better answers anywhere. That must count for something...

Anonymous said...

Watson is just a search engine with a matching server farm, so of course it'll find answers faster due to its extensive database.

If the term "AI" is used here, I think it should be used very loosely.

Yamara said...

And why were all those paper clips piling up on the Jeopardy set?

Interstellar Bill said...

As with Deep Blue, this is a case of intelligent programmers imitating human thought, not duplicating it. The machine itself only runs a huge program and intrinsically has no more 'intelligence' than an answering machine.

As such massive computation becomes commonplace over the next decade or two, we will come to understand that it will never do more than supplement human intelligence, being incapable of surpassing it.

It's ironic that our discussion of AI is shielded from bots by an anti-Turing test, namely those wiggly word-verification characters we so easily read. By betting on the impossibility of AI, the inventor of this clever method is actually the first person to ever make money on AI!