July 15, 2010

Gelernter's 'dream logic' and the quest for artificial intelligence

Internet pioneer David Gelernter explores the ethereal fuzziness of cognition in his Edge.org article, "Dream-logic, the internet and artificial consciousness." He's right about the imperfect and dream-like nature of cognition and conscious thought; AI theorists should certainly take notice.

But Gelernter starts to go off the rails toward the conclusion of the essay. His claim that an artificial consciousness would be nothing more than a zombie mind is unconvincing, as is his contention that emotional capacities are a necessary component of the cognitive spectrum. There is no reason to believe, from a functionalist perspective, that the neural correlates of consciousness cannot take root in an alternative, non-biological medium. And there are documented cases of fully conscious human beings who lack the ability to experience emotions.

Gelernter, like a lot of AI theorists, needs to brush up on his neuroscience.

At any rate, here's an excerpt from the article; you can judge the efficacy of his arguments for yourself:
As far as we know, there is no way to achieve consciousness on a computer or any collection of computers. However — and this is the interesting (or dangerous) part — the cognitive spectrum, once we understand its operation and fill in the details, is a guide to the construction of simulated or artificial thought. We can build software models of Consciousness and Memory, and then set them in rhythmic motion.

The result would be a computer that seems to think. It would be a zombie (a word philosophers have borrowed from science fiction and movies): the computer would have no inner mental world; would in fact be unconscious. But in practical terms, that would make no difference. The computer would ponder, converse and solve problems just as a man would. And we would have achieved artificial or simulated thought, "artificial intelligence."

But first there are formidable technical problems. For example: there can be no cognitive spectrum without emotion. Emotion becomes an increasingly important bridge between thoughts as focus drops and re-experiencing replaces recall. Computers have always seemed like good models of the human brain; in some very broad sense, both the digital computer and the brain are information processors. But emotions are produced by brain and body working together. When you feel happy, your body feels a certain way; your mind notices; and the resonance between body and mind produces an emotion. "I say again, that the body makes the mind" (John Donne).

The natural correspondence between computer and brain doesn't hold between computer and body. Yet artificial thought will require a software model of the body, in order to produce a good model of emotion, which is necessary to artificial thought. In other words, artificial thought requires artificial emotions, and simulated emotions are a big problem in themselves. (The solution will probably take the form of software that is "trained" to imitate the emotional responses of a particular human subject.)

One day all these problems will be solved; artificial thought will be achieved. Even then, an artificially intelligent computer will experience nothing and be aware of nothing. It will say "that makes me happy," but it won't feel happy. Still: it will act as if it did. It will act like an intelligent human being.

And then what?
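
Gelernter's fix for the emotion problem, software "trained" to imitate the emotional responses of a particular human subject, is easy to picture in miniature. Here's a minimal sketch in Python; the supervised-learning framing, the invented body-state features and the nearest-neighbour lookup are all my assumptions, not anything specified in the essay.

    import math

    # Hypothetical training data (my invention, not Gelernter's): simulated
    # "body state" readings (heart rate, skin conductance, posture openness)
    # paired with the emotion label a particular human subject reported.
    SUBJECT_RESPONSES = [
        ((72.0, 0.2, 0.8), "calm"),
        ((95.0, 0.7, 0.3), "anxious"),
        ((88.0, 0.5, 0.9), "happy"),
        ((110.0, 0.9, 0.1), "afraid"),
    ]

    def imitate_emotion(body_state):
        """Return the subject's emotion label for the closest recorded body state.

        A 1-nearest-neighbour lookup stands in for whatever trained model a real
        system would use; the point is only that the mapping is learned from one
        subject's recorded responses rather than hand-coded.
        """
        def distance(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

        _, label = min(
            (distance(body_state, state), emotion)
            for state, emotion in SUBJECT_RESPONSES
        )
        return label

    print(imitate_emotion((90.0, 0.55, 0.85)))  # prints "happy", but nothing is felt

Even at toy scale it illustrates his zombie point: the lookup returns "happy" without anything anywhere feeling happy.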

2 comments:

  1. Totally agree. Gelernter starts strong, then concludes weirdly and unconvincingly. Good analysis and systems view, bad final scenarios -- I see this often in long-term AI projections.

  2. He will say "I feel happy" and act happy, but won't be happy?

    Anybody see the problem?

    What do we do? We say we are happy. Let's check how that happens: we interpret happiness as some thought associated with a body response, both of which are reactions to a word, a scene, a person, etc. Then, after that interpretation, we think or say, "I'm happy." Are we happy? The machine reacts, has a (virtual) body reaction and a thought, and the AI thinks, "I'm happy." Would you say the AI is not happy?

    I believe there is a trace of the supernatural in his interpretation of emotion in this essay.

    http://elfilodelaguadana.tumblr.com
    http://elespaciodeaparicion.wordpress.com

