April 4, 2004

Peter Passaro on AI, the Brain, and Techlepathy

Neuroscientist Peter Passaro will be presenting at TV04. Passaro will most likely be speaking about the logistics of consciousness uploading and how difficult it will be to achieve such a feat. Peter and I got to talking about his work, and he mentioned that he had some interesting new things to say about the brain, how it works, and its capacities. Here's our correspondence:

Me:
[W]ow -- I'd really like to hear [more of] your thoughts. Out of curiosity, do you take issue with Moravec's 10^16 IPS, or is that the wrong angle to take in both comprehending and analogizing human cognition? Also, are you describing the computation required for consciousness proper, or just the overall raw computing power of the brain?

Passaro:
I'm compiling data to write an academic article on the complexity and intelligence issue. Here is my quick and dirty argument. I don't have a problem with Moravec's estimates of raw computing power, but I do have a problem with his estimate of memory, and a very big problem with the lack of discussion of specified complexity.

The standard measure of memory capacity used by AI/GI researchers is that one synapse equals one bit. I don't buy this at all (and neither does anyone in the neuro community). I don't think we have a good enough handle on computation in small mammalian brain circuits to really understand memory storage completely, but there are good reasons to think the estimates should be much, much larger, because you just can't equate a synapse with a bit.
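
[For scale, here is a quick back-of-the-envelope sketch of what the one-synapse-equals-one-bit baseline Passaro is questioning would imply. The synapse count is the one he quotes below; the script itself is mine and purely illustrative.]

```python
# What the "one synapse = one bit" assumption implies, using the ~10^14
# synapse figure quoted in the interview. Illustrative arithmetic only.
SYNAPSES = 100e12          # ~100 trillion synapses
BITS_PER_SYNAPSE = 1       # the baseline assumption being questioned

total_bits = SYNAPSES * BITS_PER_SYNAPSE
total_terabytes = total_bits / 8 / 1e12    # bits -> bytes -> terabytes
print(f"{total_bits:.1e} bits  ~  {total_terabytes:.1f} TB")
# -> 1.0e+14 bits, about 12.5 TB: a surprisingly modest number, which is part
#    of why the one-bit-per-synapse figure looks too low as a memory estimate.
```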

At the heart of this argument is that specified complexity is what determines the very special type of computation we see in brains. The current estimate of the number of synapses in a human brain is ~100 trillion. The big deal here is in the way you can organize those 10^14 synapses. A recent paper used a simple N-choose-R mathematical argument to show that the number of possible ways you could connect the neurons in a brain of 100 billion neurons with 1,000 synapses/neuron is over 10^8000! Realize this is an upper bound and a ridiculously hyperastronomical number, but I still think it is a wake-up call that we are dealing with an order of complexity unseen anywhere else in nature (the human brain is hyperastronomical in its *dynamics* and *degree of connectivity*!)
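
[The "over 10^8000" figure checks out on one reading of the N-choose-R argument: a single neuron choosing ~1,000 synaptic targets from ~100 billion candidate neurons. Below is a minimal reconstruction of that arithmetic; the reading and the script are mine, not the paper's.]

```python
# Reconstruction (one possible reading) of the N-choose-R estimate above:
# in how many ways can a single neuron choose ~1,000 synaptic partners out of
# ~10^11 neurons? log10 of the binomial coefficient is computed via log-gamma
# to avoid overflow.
from math import lgamma, log

def log10_choose(n, k):
    """log10 of C(n, k) via the log-gamma function."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)) / log(10)

NEURONS = 1e11              # ~100 billion neurons
SYNAPSES_PER_NEURON = 1000  # ~10^3 synapses per neuron

print(f"~10^{log10_choose(NEURONS, SYNAPSES_PER_NEURON):.0f} wiring choices per neuron")
# -> roughly 10^8432, i.e. "over 10^8000" as quoted. An upper bound that
#    ignores anatomical constraints on which connections are actually possible.
```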

I think it likely that we already have devices with the requisite computational power in terms of raw speed, but we are far from having the required level of connectivity, and speed of altering that connectivity, between information-processing units (transistors or whatever fundamental substrate you are using) to produce the type of dynamics we see in neural systems. I think the hot areas for AI/GI on the software end should be network and distributed-systems research, and on the hardware end, devices such as FPGAs, which can quickly alter their structure (current thinking in AI/GI usually ignores how quickly biological networks rearrange themselves as well).
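
[To make the connectivity point concrete, here is a toy sketch, mine rather than Passaro's, of a network whose wiring can be changed as fast as its dynamics run. Each unit's computation is trivial; the interesting state is the adjacency structure itself, which is the property FPGA-like hardware is meant to capture.]

```python
# Toy network with runtime-mutable connectivity (illustrative only).
import random

class RewirableNetwork:
    def __init__(self, n_units, fan_out, seed=0):
        self.rng = random.Random(seed)
        self.n = n_units
        # adjacency list: unit index -> set of downstream unit indices
        self.targets = {i: set(self.rng.sample(range(n_units), fan_out))
                        for i in range(n_units)}
        self.state = [self.rng.random() for _ in range(n_units)]

    def step(self):
        """One synchronous update: each unit takes the mean of its inputs."""
        incoming = {i: [] for i in range(self.n)}
        for src, dsts in self.targets.items():
            for dst in dsts:
                incoming[dst].append(self.state[src])
        self.state = [sum(v) / len(v) if v else s
                      for s, v in zip(self.state, incoming.values())]

    def rewire(self, fraction=0.1):
        """Replace a fraction of each unit's outgoing connections at random."""
        for src, dsts in self.targets.items():
            for old in self.rng.sample(sorted(dsts),
                                       max(1, int(fraction * len(dsts)))):
                dsts.discard(old)
                dsts.add(self.rng.randrange(self.n))

net = RewirableNetwork(n_units=200, fan_out=10)
for _ in range(5):
    net.step()
    net.rewire(0.2)   # the wiring changes as fast as the dynamics run
```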

Lastly, we need to figure out how to arrange all this stuff in patterns that are organized to take advantage of information flows. This is what I meant by *specified* complexity. We have to figure out how things are arranged to produce the type of dynamics we see in brains, which is why I think you still need to do neuroscience to get to AI/GI. Evolutionary computing may be helpful in this regard as well (allowing the system to evolve through the possible space of structures), but neuroscience reverse-engineering methods are going to greatly reduce the search space of good computational structures for intelligence and consciousness (part of the reason I do the research I do). So much for the quick and dirty :)
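
[As an illustration of "evolving through the possible space of structures", a minimal evolutionary loop over connectivity patterns might look like the sketch below. The fitness function is a stand-in of my own; in practice it would be task performance, with neuroscience-derived priors shrinking the search space, as Passaro suggests.]

```python
# Toy evolutionary search over network structures (binary connectivity masks).
# The fitness function is a placeholder for "produces useful dynamics".
import random

rng = random.Random(42)
N = 16                      # units in the toy network
GENOME_BITS = N * N         # one bit per possible directed connection

def fitness(genome):
    # Placeholder objective: prefer wiring diagrams near a target sparsity.
    target = int(0.1 * GENOME_BITS)
    return -abs(sum(genome) - target)

def mutate(genome, rate=0.02):
    return [bit ^ (rng.random() < rate) for bit in genome]

population = [[rng.randint(0, 1) for _ in range(GENOME_BITS)] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                               # keep the best
    population = parents + [mutate(rng.choice(parents)) for _ in range(40)]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "| connections:", sum(best))
```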

Me:
Peter, a few things:

- Yes, I love [Ben] Goertzel's work. I'm fairly convinced that his group will continue to be on the cutting edge of AI/GI work for years to come.

- Wow, your figures on the brain's upper-bound capacity are staggering, to say the least. If you're right, this could significantly push back projections for the development of human-equivalent AI/GI, and even the hypothesized Singularity. You also rightly point out how this further reveals the complexity problem of the human brain. I will continue to mull over this potential discovery.

- Since your encounter with Stuart Hameroff last year [at TransVision 2003], have you considered quantum computation and/or quantum effects in your consciousness studies? Do you still believe that consciousness is an emergent property?

- In your work with neural interfaces, have you given any consideration to technologically endowed telepathy? If so, how do you see it being done? On this subject, check out [the] recent correspondence I had with Chuck Jorgensen of NASA's Ames Research Center.

Passaro:
I think this whole competition over who will get there first, AI or IA, is just ridiculous. If we can create interactive systems in which machine intelligence learns to create better computational structures by observing biological neuronal networks, and uses that info to create denser linkages to biological networks to learn more, we could see progress on both fronts much faster: a self-teaching system for intelligence integration. In my mind, the search for general intelligence should be one field. Ben may be one of the few AI researchers I'm aware of who would be sympathetic to this view.

The complexity figures are making me think that we seriously need to consider moving towards evolved systems (I know many of the GI gang are thinking in this direction) if we want AI/GI anytime soon, and that for my own research I am really going to need the assistance of adaptable machine intelligence to deal with the type of large-scale processing I am attempting to study in biological neural systems.

I have given a lot more thought to my conversation with Hameroff. To be frank, I was not at all impressed by his argument, and I was especially shocked to find he knew very little about how people in neuroscience mathematically model neurons or networks of them. It confirmed the impression from the publications that this group really just did not know what they were talking about. They are on to something about fast microtubule switching being involved in neuronal transmission of information, but this doesn't have anything to do with quantum effects or consciousness (except at the fundamental level of single-neuron information processing).

What has really convinced me that quantum processing is not involved in consciousness is the complexity-scaling issue. You see certain transitions in the formation of more complex structures in the universe: quarks -> particles -> atomic matter -> solar systems -> life -> consciousness. The reach of the quantum world just does not go very high up that scale as far as information transmission, because there is not enough capacity for organization at the quantum level; everything gets probabilistically filtered before you get to atomic matter. You have to go up several levels of organizational complexity before reaching structures which can be dynamically organized to produce something on the order of human intelligence (and these structures are probabilistically filtering lots of information from lots of neurons, which are in turn probabilistically filtering lots of information from ion and molecule flows... etc.)

On "telepathic" interfaces - YES! I have given this some thought on a number of occasions. I have often thought that the killer app for public acceptance of nonmedical neural implantation is the implantable cell phone. Technologically, I think it is doable NOW! This combination of subvocalized output along with input through something like a cochlear implant is exactly the device I was imagining. I have seriously played around with business plans for how to get this done, and if no one does it before I graduate, this is one of the directions I may try to head.

The only thing that bothers me is that I know who the initial target market and major funding source would be - DARPA - which kind of turns my stomach a bit. They would love to have SEAL teams, fighter pilots, and intelligence operatives running around with these things. I would hate to see this technology classified and restricted for their use only. Imagine how effective a small team of people who could communicate like this would be. The only major technological problem is transmission at a distance, which requires significant power that you want to keep away from biological tissue, but these issues are already being dealt with by the cell phone industry and should have workable solutions soon.
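
[Purely to pin down the architecture Passaro is describing, here is a hypothetical data-flow sketch of the "implantable cell phone": subvocalized signals decoded on the way out, a radio link in the middle, cochlear-style stimulation on the way in. Every function below is a stub of my own invention, not a real device API.]

```python
# Hypothetical pipeline for the subvocal-out / cochlear-in device sketched
# above. All three stages are stubs; real signal processing would replace them.

def decode_subvocal(emg_samples):
    """Stub: map subvocalized muscle/nerve signals to text or phonemes."""
    return "status report" if emg_samples else ""

def transmit(payload):
    """Stub: off-body radio link (the power-near-tissue problem noted above)."""
    return payload

def encode_for_cochlear(payload):
    """Stub: turn received text/audio into cochlear-style stimulation frames."""
    return [ord(ch) % 32 for ch in payload]   # placeholder channel activations

outbound = decode_subvocal([0.1, 0.3, 0.2])
stimulation = encode_for_cochlear(transmit(outbound))
print(outbound, "->", stimulation[:6])
```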

BTW, the next step up in these devices would be to go after the part of the motor cortex responsible for output to the throat and mouth, or, slightly higher up, the language output centers, in a fashion similar to what Nicolelis has done for arm movements; i.e., you wouldn't need any muscular movement at all, just thought, to produce language output. This could also be done with current microelectrode technology.
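
[For context, the Nicolelis-style decoding referred to here is, at its core, a regression from many electrodes' firing rates to an intended output. The sketch below, with synthetic data, is mine; real decoding of language output would be a much harder problem.]

```python
# Minimal sketch of population decoding: a regularized linear map from
# multi-electrode firing rates to an intended output variable. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_electrodes = 500, 64

true_weights = rng.normal(size=n_electrodes)                   # hidden "tuning"
rates = rng.poisson(lam=5.0, size=(n_samples, n_electrodes)).astype(float)
intent = rates @ true_weights + rng.normal(scale=2.0, size=n_samples)

# Ridge regression (least squares with a small penalty) as the decoder.
lam = 1.0
w_hat = np.linalg.solve(rates.T @ rates + lam * np.eye(n_electrodes),
                        rates.T @ intent)

pred = rates @ w_hat
print("correlation of decoded vs. true intent:",
      round(float(np.corrcoef(pred, intent)[0, 1]), 3))
```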

The mappings Chuck is talking about are exactly the same type of thing we are trying to correlate within our system, but because we are working in an in vitro system with no native structure, we are trying to determine general rules for how systems organize themselves in response to sensory input and what the state space of their output will be. Once these rules are determined, it will become much easier to produce cortical implants. I am becoming more and more convinced that we don't need to be directly interfaced to lots of single neurons to get information out; we just need an array of listening electrodes. Putting information in is going to be more difficult, because no one is sure how to use extracellular field stimulation to get information into cortical neural networks except in the simplest of cases; luckily, cochlear information is the simplest of cases :)
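
[One common way to characterize "the state space of their output" from an array of listening electrodes is to bin spike counts per electrode and look at the leading principal components. The sketch below uses synthetic data and is my illustration, not the lab's actual analysis.]

```python
# Rough sketch: characterizing the output state space of an electrode array.
# Bin spike counts per electrode, then use PCA (via SVD) to see how many
# dimensions the population activity actually occupies. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_electrodes, n_latents = 1000, 60, 3

# Pretend the array activity is driven by a few shared latent signals + noise.
latents = rng.normal(size=(n_bins, n_latents))
mixing = rng.normal(size=(n_latents, n_electrodes))
counts = rng.poisson(lam=np.exp(0.3 * (latents @ mixing) + 1.0))

centred = counts - counts.mean(axis=0)
_, s, _ = np.linalg.svd(centred, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)
print("variance explained by first 5 components:", np.round(explained[:5], 3))
```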
