May 20, 2011
Ed Boyden on optogenetics and neural prosthetics [TED]
Neuroscientist Ed Boyden shows how, by inserting genes for light-sensitive proteins into brain cells, he can selectively activate or de-activate specific neurons with fiber-optic implants. With this unprecedented level of control, he's managed to cure mice of analogs of PTSD and certain forms of blindness. And on the horizon: neural prosthetics.
March 1, 2011
Raymond Tallis on the metaphysical limitations of neuroscience
Author Raymond Tallis reviews two new books about consciousness: Soul Dust: the Magic of Consciousness by Nicholas Humphrey and Self Comes to Mind: Constructing the Conscious Brain by Antonio Damasio. Tallis opens,
The republic of letters is in thrall to an unprecedented scientism. The word is out that human consciousness - from the most elementary tingle of sensation to the most sophisticated sense of self - is identical with neural activity in the human brain and that this extraordinary metaphysical discovery is underpinned by the latest findings in neuroscience. Given that the brain is an evolved organ, and, as the evolutionary biologist Theodosius Dobzhansky said, nothing in biology makes sense except in the light of evolution, the neural explanation of human consciousness demands a Darwinian interpretation of our behaviour. The differences between human life in the library or the operating theatre and animal life in the jungle or the savannah are more apparent than real: at the most, matters of degree rather than kind.
These beliefs are based on elementary errors. Just because neural activity is a necessary condition of consciousness, it does not follow that it is a sufficient condition of consciousness, still less that it is identical with it. And Darwinising human life confuses the organism Homo sapiens with the human person, biological roots with cultural leaves. Nevertheless, the coupling of neuromania and Darwinitis has given birth to emerging disciplines based on neuro-evolutionary approaches to human psychology, economics, social science, literary criticism, aesthetics, theology and the law.
These pseudo-disciplines are flourishing in academe and are covered extensively in the popular press, in articles usually accompanied by a brain scan (described by the writer Matt Crawford as a "fast-acting solvent of critical faculties"). Only last month, David Brooks asserted in the New Yorker that "brain science helps fill the hole left by the atrophy of theology and philosophy".
There are more cautious writers, but even for them the attraction of biologism seems irresistible. V S Ramachandran asserts correctly, in his new book, The Tell-Tale Brain: Unlocking the Mystery of Human Nature, that humanity "transcends apehood to the same degree by which life transcends mundane chemistry and physics". Even so, he is prepared to claim that we enjoy Picasso's paintings for the same reason that gull chicks prefer fake maternal beaks with an excess of markings to the real thing: they are "superstimuli". Both books under review acknowledge the uniqueness of human beings but relapse repeatedly into accounts of the mind, self and consciousness that appeal to a mixture of neuroscience and evolutionary theory. Despite the ingenuity and erudition of the authors, they serve only to illustrate the shortcomings of neuroscientific attempts to capture human consciousness and human nature.

Read the rest.
January 22, 2011
Don Tapscott: Designing your mind
Says Don Tapscott:
The existence of lifelong neuroplasticity is no longer in doubt. The brain runs on a "use it or lose it" motto. So could we "use it to build it right?" Surely if we are proactive, the demands of our information rich, multi-stimuli, fast paced, multitasking, digital existence can be shaped to our advantage. In fact, psychiatrist Dr. Stan Kutcher, an expert on adolescent mental health who has studied the effect of digital technology on brain development, says "There is emerging evidence suggesting that exposure to new technologies may push the Net Generation brain past conventional capacity limitations."
My own research suggests that when the straight A student is doing her homework at the same time as five other things online she is not actually multitasking. Rather she has developed a better active working memory and better switching abilities. Personally I can't read my email and listen to iTunes at the same time, but she can. She has a brain more appropriate to the demands of the digital age than I do.
How could we use design thinking to change the way we think? Good design typically begins with some principles and functional objectives.

Read more.
Lift weights, get smarter
According to the New York Times, voluntary wheel running with a load increases muscular adaptation and enhances gene expression in rat brains, indicating that this kind of exercise may have neurological effects identical to, or even more beneficial than, those of endurance training.
Whether the same mechanisms occur in humans who undertake resistance training of one kind or another is not yet fully clear, but “the data look promising,” said Teresa Liu-Ambrose, a principal investigator at the Brain Research Center at the University of British Columbia. In results from her lab, older women who lifted weights performed significantly better on various tests of cognitive functioning than women who completed toning classes. Ms. Liu-Ambrose has also done brain scans of people who lifted weights to determine whether neurogenesis is occurring in their brains, and the results, still unpublished, are encouraging, she said.
Just how resistance training initiates changes in cognition remains somewhat mysterious. Ms. Liu-Ambrose said that “we now know that resistance training has significant benefits on cardiovascular health” and reduces “cardiovascular risk factors,” which otherwise would raise “one’s risk of cognitive impairment.” She speculates that resistance training, by strengthening the heart, improves blood flow to the brain generally, which is associated with better cognitive function. Perhaps almost as important, she added, resistance training at first requires an upsurge in brain usage. You have to think about “proper form and learning the technique,” she said, “while there generally is less learning involved in aerobic training,” like running.
The brain benefits from being used, so that, in a neat circle, resistance training may both demand and create additional brain circuitry. Imagine what someone like Einstein might have accomplished if he had occasionally gone to the gym.

More.
December 3, 2010
James Giordano: "Neuroscience, Neurotechnology, and Strivings to Flourish" [CFI conference on biomedical enhancement]
James Giordano now talking about Neuroscience, Neurotechnology, and Strivings to Flourish.
One of his primary messages is that we need to be wary of anything with the prefix "neuro".
Neurotechnologic imperative: "...if you can build it, do so..."
Human flourishing prompts the questions:
- What is it to flourish?
- What is the "good"?
- How is/should it be achieved? Means, ends, limits...
Contextualized to bio-psychosocial nature of our species.
Flourishing:
- Maximizing function
- Does this mean maintaining or optimizing?
- Treatment or enhancement?
- Objectively and/or subjectively?
- Consideration of gain vs. loss?
- Given our bio-psychosocial nature, on what level(s)?
Janusian face of neuroscience: Utopian aspirations vs. dystopian anxieties
Pros and cons: A natural need to know and intervene is inherent to human flourishing; inquiry and action are both right and good; partial knowledge in areas of profound impact can effect broad and unforeseen consequences; there are intellectual and moral limits on inquiry.
We should not retard progress, but we do need to mitigate non-contemplative advancement.
John Shook: "Philosophical Challenges for a Neuroscience of Moral Enhancements" [CFI conference on biomedical enhancement]
John Shook of the Center for Inquiry is speaking about Philosophical Challenges for a Neuroscience of Moral Enhancements.
What would a moral enhancer do? May mean making a person more 'moral.' Or like a mood enhancer, changing one's inner sense of moral qualities. Or sensitivity to situations. Or wanting to do the right thing more often. But this doesn't necessarily imply a change to conduct. Other fears and desires can have similar effects on behavior.
Issue: Matching internal and external moral standards. Two different things: What I believe is moral, what someone else believes is moral.
Fine tuning of moral enhancers may be required, creating a "boutique" style of moral enhancers. May not represent genuine cases of moral enhancement. These are internal objective standards. We have to go outside to get better moral standards.
Objectivism is one path. Still people will pass their own judgement.
Should we adhere to the majority opinion? Where does culture agree on such things? Can we agree that certain conduct is impermissible? Cultural conventionalism as a way to inform morality.
But what about something like generosity? Do we really mean it?
So objectivism and cultural conventionalism are unsatisfactory.
Perhaps we need a combination of subjectivism and conventionalism.
But what items/subjects are worthy of moral consideration? Sports? Religion? Boutique modifications may not adhere to conventional opinions on what is morally acceptable.
Moral enhancement: How might it actually be done? Could be done in several ways.
- get the right moral answer
- enhance judgement of situations morally
- enhance deliberation of doing the morally right thing
- enhance the motivation to choose what moral deliberation indicates
- enhance volitional power to do the morally right thing
- enhance the capacity of the act [external]
Problem: there may be no objective, definable moral judgments to begin with. Objective morality exists nowhere. And what are conscious intentions? Are they epiphenomenal? What mechanism in the brain executes the decision? Will and free will?
Brain science discoveries will better inform and answer these objections. Outdated notions of decision and volition need to be discarded from the discussion.
Intentionality types; factors for free will:
- Intentional causality (executive)
- Deliberate intentionality
- Thoughtful control (rational)
Do philosophers exaggerate the role that reason plays in decision making? Neuroscientific advances will improve our idea of this and of what it means to be moral agents.
November 4, 2010
Gero Miesenboeck reengineers a fruit fly's brain
In the quest to map the brain, many scientists have attempted the incredibly daunting task of recording the activity of each neuron. Gero Miesenboeck works backward -- manipulating specific neurons to figure out exactly what they do, through a series of stunning experiments that reengineer the way fruit flies perceive light.
September 28, 2010
Sebastian Seung @ TED: I am my connectome
Neuroscientist Sebastian Seung recently gave an extremely insightful and informative TED talk called, "I am my connectome." I highly recommend this as it touches upon a number of timely subjects, including the Human Connectome Project, the important work of Harvard neuroscientist Kenneth Hayworth, cryonics, and (peripherally) whole brain emulation.
September 26, 2010
Boyden: Helping brains and machines work together
Image credit: Technology Review
Boyden notes that over the past 20 years there has been a slew of technologies that have enabled the observation or perturbation of information in the brain.
Take functional MRI, for example, which measures blood flow changes associated with brain activity. FMRI technology is being explored for purposes as diverse as lie detection, prediction of human decision making, and assessment of language recovery after stroke.
And implanted electrical stimulators, which enable control of neural circuit activity, are borne by hundreds of thousands of people to treat conditions such as deafness, Parkinson's disease, and obsessive-compulsive disorder. In addition, new methods, such as the use of light to activate or silence specific neurons in the brain, are being widely utilized by researchers to reveal insights into how to control neural circuits to achieve therapeutically useful changes in brain dynamics. "We are entering a neurotechnology renaissance," says Boyden, "in which the toolbox for understanding the brain and engineering its functions is expanding in both scope and power at an unprecedented rate."
He continues:
This toolbox has grown to the point where the strategic utilization of multiple neurotechnologies in conjunction with one another, as a system, may yield fundamental new capabilities, both scientific and clinical, beyond what they can offer alone. For example, consider a system that reads out activity from a brain circuit, computes a strategy for controlling the circuit so it enters a desired state or performs a specific computation, and then delivers information into the brain to achieve this control strategy. Such a system would enable brain computations to be guided by predefined goals set by the patient or clinician, or adaptively steered in response to the circumstances of the patient's environment or the instantaneous state of the patient's brain.
Some examples of this kind of "brain coprocessor" technology are under active development, such as systems that perturb the epileptic brain when a seizure is electrically observed, and prosthetics for amputees that record nerves to control artificial limbs and stimulate nerves to provide sensory feedback. Looking down the line, such system architectures might be capable of very advanced functions--providing just-in-time information to the brain of a patient with dementia to augment cognition, or sculpting the risk-taking profile of an addiction patient in the presence of stimuli that prompt cravings.
Looking ahead to the future, Boyden admits that we'll need to be careful:

Of course, giving machines the authority to serve as proactive human coprocessors, and allowing them to capture our attention with their computed priorities, has to be considered carefully, as anyone who has lost hours due to interruption by a slew of social-network updates or search-engine alerts can attest. How can we give the human brain access to increasingly proactive coprocessing technologies without losing sight of our overarching goals? One idea is to develop and deploy metrics that allow us to evaluate the IQ of a human plus a coprocessor, working together--evaluating the performance of collaborating natural and artificial intelligences in a broad battery of problem-solving contexts. After all, humans with Internet-based brain coprocessors (e.g., laptops running Web browsers) may be more distractible if the goals include long, focused writing tasks, but they may be better at synthesizing data broadly from disparate sources; a given brain coprocessor configuration may be good for some problems but bad for others. Thinking of emerging computational technologies as brain coprocessors forces us to think about them in terms of the impacts they have on the brain, positive and negative, and importantly provides a framework for thoughtfully engineering their direct, as well as their emergent, effects.

More.
September 20, 2010
Human Connectome Project to start mapping brain's connections

To map the brain's connections, state-of-the-art scanners will be employed to reveal the brain's intricate circuitry in high resolution.
The grants are the first awarded under the Human Connectome Project and they will support two collaborating research consortia. The first will be led by researchers at Washington University, St. Louis, and the University of Minnesota, Twin Cities, while the other will be led by investigators at Massachusetts General Hospital (MGH)/Harvard University, Boston, and the University of California Los Angeles (UCLA).
"We're planning a concerted attack on one of the great scientific challenges of the 21st Century," said Washington University's Dr. David Van Essen, Ph.D., who co-leads one of the groups with Minnesota's Kamil Ugurbil, Ph.D. "The Human Connectome Project will have transformative impact, paving the way toward a detailed understanding of how our brain circuitry changes as we age and how it differs in psychiatric and neurologic illness."
The Connectome projects are being funded by 16 components of NIH under its Blueprint for Neuroscience Research.

This highly coordinated effort will use state-of-the-art imaging instruments, analysis tools and informatics technologies — and all of the resulting data will be freely shared with the research community. Individual variability in brain connections underlies the diversity of human cognition, perception and motor skills, so understanding these networks promises advances in brain health.
One of the teams will map the connectomes in each of 1,200 healthy adults — twin pairs and their siblings from 300 families. The maps will show the anatomical and functional connections between parts of the brain for each individual, and will be related to behavioral test data. Comparing the connectomes and genetic data of genetically identical twins with fraternal twins will reveal the relative contributions of genes and environment in shaping brain circuitry and pinpoint relevant genetic variation. The maps will also shed light on how brain networks are organized.
In tooling up for the screening, the researchers will optimize magnetic resonance imaging (MRI) scanners to capture the brain’s anatomical wiring and its activity, both when participants are at rest and when challenged by tasks. All participants will undergo such structural and functional scans at Washington University. For these, researchers will use a customized MRI scanner with a magnetic field of 3 Tesla. This Connectome Scanner will incorporate new imaging approaches developed by consortium scientists at Minnesota and Advanced MRI Technologies and will provide ten-fold faster imaging times and better spatial resolution.
Creating these maps requires sophisticated statistical and visual informatics approaches; understanding the similarities and differences in these maps among sub-populations will improve our understanding of the human brain in health and disease.
More.
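As a rough aside (mine, not part of the NIH announcement): the twin comparison works because identical twins share essentially all of their genes while fraternal twins share roughly half, so the gap between the two groups' similarity scores puts a number on the genetic contribution. A toy calculation, with made-up connectome-similarity correlations, using Falconer's classic estimate:

```python
# Rough aside, not part of the NIH text: the classic twin-study arithmetic for
# splitting genes from environment. The correlations below are invented purely
# for illustration.
r_mz = 0.80   # hypothetical connectome-similarity correlation, identical (MZ) twins
r_dz = 0.55   # hypothetical correlation, fraternal (DZ) twins

heritability = 2 * (r_mz - r_dz)          # Falconer's h^2 estimate
shared_environment = r_mz - heritability  # Falconer's c^2 estimate

print(f"h^2 ~ {heritability:.2f}, shared environment ~ {shared_environment:.2f}")
```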
September 19, 2010
One step closer to technologically assisted telepathy

In an early step toward letting severely paralyzed people speak with their thoughts, researchers translated brain signals into words using two grids of 16 microelectrodes implanted beneath the skull but atop the brain:
Using the experimental microelectrodes, the scientists recorded brain signals as the patient repeatedly read each of 10 words that might be useful to a paralyzed person: yes, no, hot, cold, hungry, thirsty, hello, goodbye, more and less.
Later, they tried figuring out which brain signals represented each of the 10 words. When they compared any two brain signals - such as those generated when the man said the words "yes" and "no" - they were able to distinguish brain signals for each word 76 percent to 90 percent of the time.
When they examined all 10 brain signal patterns at once, they were able to pick out the correct word any one signal represented only 28 percent to 48 percent of the time - better than chance (which would have been 10 percent) but not good enough for a device to translate a paralyzed person's thoughts into words spoken by a computer.

The researchers discovered that each spoken word produced varying brain signals, and thus the pattern of electrodes that most accurately identified each word varied from word to word. This finding supports the theory that closely spaced microelectrodes can capture signals from single, column-shaped processing units of neurons in the brain.
All this said, the process is far from perfect. The researchers were 85% accurate when distinguishing brain signals for one word from those for another when they used signals recorded from the facial motor cortex. They were 76% accurate when using signals from Wernicke's area (and combining data didn't help). The scientists were able to record 90% accuracy when they selected the five microelectrodes on each 16-electrode grid that were most accurate in decoding brain signals from the facial motor cortex. But in the more difficult test of distinguishing brain signals for one word from signals for the other nine words, the researchers initially were accurate only 28% of the time. However, when they focused on signals from the five most accurate electrodes, they identified the correct word 48% of the time.
So, there's lots of work to be done, but the proof of concept appears to be (mostly) there.
This research is being done to help those with locked-in syndrome, but once it gets developed there will be broader implications and applications. A more sophisticated and refined version of this technology, and in conjunction with other neural interfacing technologies, could result in the development of technologically assisted telepathy.
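To get an intuition for why pairwise discrimination scores so much higher than the 10-way test, here's a toy sketch (my own illustration, not the Utah team's method) using made-up "electrode feature vectors" and a simple nearest-centroid decoder:

```python
# Toy illustration (not the researchers' actual decoder): telling two words
# apart is easier than picking one word out of ten. We fake "electrode
# feature vectors" for 10 words and score a nearest-centroid decoder both ways.
import numpy as np

rng = np.random.default_rng(0)
n_words, n_features, n_trials, noise = 10, 16, 40, 2.0   # hypothetical sizes

templates = rng.normal(size=(n_words, n_features))        # one "signature" per word
trials = templates[:, None, :] + noise * rng.normal(size=(n_words, n_trials, n_features))

train, test = trials[:, :20], trials[:, 20:]               # split trials per word
centroids = train.mean(axis=1)                             # decoder = nearest centroid

def classify(x, cents):
    return np.argmin(((cents - x) ** 2).sum(axis=1))

# 10-way decoding: guess which of all ten words a trial came from.
ten_way = np.mean([classify(x, centroids) == w
                   for w in range(n_words) for x in test[w]])

# Pairwise decoding: for each pair of words, only decide between those two.
pair_scores = []
for a in range(n_words):
    for b in range(a + 1, n_words):
        cents = centroids[[a, b]]
        hits = [classify(x, cents) == 0 for x in test[a]] + \
               [classify(x, cents) == 1 for x in test[b]]
        pair_scores.append(np.mean(hits))

print(f"pairwise accuracy ~{np.mean(pair_scores):.0%}, 10-way accuracy ~{ten_way:.0%}")
```

With ten candidates the decoder has nine ways to be wrong instead of one, so accuracy drops even when the underlying signals are unchanged.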
August 21, 2010
David Chalmers: Consciousness is not substrate dependent

It is widely accepted that conscious experience has a physical basis. That is, the properties of experience (phenomenal properties, or qualia) systematically depend on physical properties according to some lawful relation. There are two key questions about this relation. The first concerns the strength of the laws: are they logically or metaphysically necessary, so that consciousness is nothing "over and above" the underlying physical process, or are they merely contingent laws like the law of gravity? This question about the strength of the psychophysical link is the basis for debates over physicalism and property dualism. The second question concerns the shape of the laws: precisely how do phenomenal properties depend on physical properties? What sort of physical properties enter into the laws' antecedents, for instance; consequently, what sort of physical systems can give rise to conscious experience? It is this second question that I address in this paper.

Chalmers sets up a series of arguments and thought experiments which point to the conclusion that functional organization suffices for conscious experience, what he calls nonreductive functionalism. He argues that conscious experience is determined by functional organization without necessarily being reducible to functional organization. This bodes well for the AI and whole brain emulation camp.
Chalmers concludes:
In any case, the conclusion is a strong one. It tells us that systems that duplicate our functional organization will be conscious even if they are made of silicon, constructed out of water-pipes, or instantiated in an entire population. The arguments in this paper can thus be seen as offering support to some of the ambitions of artificial intelligence. The arguments also make progress in constraining the principles in virtue of which consciousness depends on the physical. If successful, they show that biochemical and other non-organizational properties are at best indirectly relevant to the instantiation of experience, relevant only insofar as they play a role in determining functional organization.
Of course, the principle of organizational invariance is not the last word in constructing a theory of conscious experience. There are many unanswered questions: we would like to know just what sort of organization gives rise to experience, and what sort of experience we should expect a given organization to give rise to. Further, the principle is not cast at the right level to be a truly fundamental theory of consciousness; eventually, we would like to construct a fundamental theory that has the principle as a consequence. In the meantime, the principle acts as a strong constraint on an ultimate theory.

Entire paper.
Making brains: Reverse engineering the human brain to achieve AI

That said, I have noticed an increasing interest in the whole brain emulation (WBE) approach. Kurzweil's upcoming book, How the Mind Works and How to Build One, is a good example of this—but hardly the only one. Futurists with a neuroscientific bent have been advocating this approach for years now, most prominently the European transhumanist camp headed by Nick Bostrom and Anders Sandberg.
While I believe that reverse engineering the human brain is the right approach, I admit that it's not going to be easy. Nor is it going to be quick. This will be a multi-disciplinary endeavor that will require decades of data collection and the use of technologies that don't exist yet. And importantly, success won't come about all at once. This will be an incremental process in which individual developments will provide the foundation for overcoming the next conceptual hurdle.
But we have to start somewhere, and we have to start with a plan.
Rules-based AI versus whole brain emulation
Now, some computer theorists maintain that the rules-based approach to AI will get us there first. Ben Goertzel is one such theorist. I had a chance to debate this with him at the recent H+ Summit at Harvard. His basic argument is that the WBE approach over-complexifies the issue. "We didn't have to reverse engineer the bird to learn how to fly," he told me. Essentially, Goertzel is confident that the hard-coding of artificial general intelligence (AGI) is a more elegant and direct approach; it'll simply be a matter of identifying and developing the requisite algorithms sufficient for the emergence of the traits we're looking for in an AGI—things like learning and adaptation. As for the WBE approach, Goertzel thinks it's overkill and overly time consuming. But he did concede to me that he thinks the approach is sound in principle.
This approach aside, like Kurzweil, Bostrom, Sandberg and a growing number of other thinkers, I am drawn to the WBE camp. The idea of reverse engineering the human brain makes sense to me. Unlike the rules-based approach, WBE works off a tried-and-true working model; we're not having to re-invent the wheel. Natural selection, through excruciatingly tedious trial-and-error, was able to create the human brain—and all without a preconceived design. There's no reason to believe that we can't figure out how this was done; if the brain could come about through autonomous processes, then it can most certainly come about through the diligent work of intelligent researchers.
Emulation, simulation and cognitive functionalism
Emulation refers to a 1-to-1 model where all relevant properties of a system exist. This doesn't mean recreating the human brain in exactly the same way as it resides inside our skulls. Rather, it implies the recreation of all its properties in an alternative substrate, namely a computer system.
Moreover, emulation is not simulation. We're not looking to give the appearance of human-equivalent cognition. A simulation implies that not all properties of a model are present. Again, it's a complete 1:1 emulation that we're after.
Now, given that we're looking to model the human brain in digital substrate, we have to work according to a rather fundamental assumption: computational functionalism. This goes back to the Church-Turing thesis, which holds that every effectively computable function can be computed by a Turing machine, and to the universality of Turing machines: a universal Turing machine can emulate any other Turing machine. Essentially, this means that every physically computable function can be computed by a Turing machine. And if brain activity is regarded as a function that is physically computed by brains, then it should be possible to compute it on a Turing machine. Like a computer.
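For the curious, here's a minimal sketch of the emulation idea (illustrative only): a single generic interpreter that will run any machine you hand it, given nothing but that machine's transition table. The bit-flipping table is a made-up example.

```python
# Minimal sketch of the universality idea: one interpreter, any transition table.
from collections import defaultdict

def run_turing_machine(table, tape, state="start", blank="_", max_steps=10_000):
    """table maps (state, symbol) -> (new_state, symbol_to_write, head move of -1/0/+1)."""
    cells = defaultdict(lambda: blank, enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, cells[head], move = table[(state, cells[head])]
        head += move
    used = sorted(cells)
    return "".join(cells[i] for i in range(used[0], used[-1] + 1)).strip(blank)

# A made-up example machine: flip every bit on the tape, then halt.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt",  "_", 0),
}

print(run_turing_machine(flip_bits, "10110"))  # -> 01001
```

The same `run_turing_machine` function runs any table you give it; that interchangeability is the intuition behind treating the brain's computation as, in principle, portable to another substrate.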
So, if you believe that there's something mystical or vital about human cognition you should probably stop reading now.
Or, if you believe that there's something inherently physical about intelligence that can't be translated into the digital realm, you've got your work cut out for you to explain what that is exactly—keeping in mind that any informational process is computational, including those brought about by chemical reactions. Moreover, intelligence, which is what we're after here, is something that's intrinsically non-physical to begin with.
The roadmap to whole brain emulation
A number of critics point out that we'll never emulate a human brain on account of the chaos and complexity inherent in such a system. On this point I'll disagree. As Bostrom and Sandberg have pointed out, we will not need to understand the whole system in order to emulate it. What's required is a functional understanding of all necessary low-level information about the brain and knowledge of the local update rules that change brain states from moment to moment. What is meant by low-level at this point is an open question, but it likely won't involve a molecule-by-molecule understanding of cognition. And as Ray Kurzweil has revealed, the brain contains masterful arrays of redundancy; it's not as complicated as we currently think.
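To make the "local update rules" point a bit more concrete, here's a toy sketch (the wiring and numbers are invented, and real neurons are vastly messier): a network whose moment-to-moment behaviour is produced entirely by stepping each unit forward according to its own inputs, with no global understanding of the system required.

```python
# Illustrative sketch only: emulating a system by applying local update rules
# step by step. The connection weights and the update rule are made up.
import numpy as np

rng = np.random.default_rng(1)
n = 100
weights = rng.normal(scale=0.1, size=(n, n))   # hypothetical connection strengths
state = rng.random(n)                           # current activity of each unit

def step(state, weights, leak=0.9):
    """Local rule: each unit's next value depends only on itself and its inputs."""
    drive = weights @ state
    return np.tanh(leak * state + drive)

for t in range(50):                             # moment-to-moment state changes
    state = step(state, weights)

print("mean activity after 50 steps:", round(float(state.mean()), 3))
```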
In order to gain this "low-level functional understanding" of the human brain we will need to employ a series of interdisciplinary approaches (most of which are currently underway). Specifically, we're going to require advances in:
- Computer science: We have to improve the hardware component; we're going to need machines with the processing power required to host a human brain; we're also going to need to improve the software component so that we can create algorithmic correlates to specific brain function.
- Microscopy and scanning technologies: We need to better study and map the brain at the physical level; brain slicing techniques will allow us to visibly study cognitive action down to the molecular scale; specific areas of inquiry will include molecular studies of individual neurons, the scanning of neural connection patterns, determining the function of neural clusters, and so on.
- Neurosciences: We need more impactful advances in the neurosciences so that we may better understand the modular aspects of cognition and start mapping the neural correlates of consciousness (what is currently a very grey area).
- Genetics: We need to get better at reading our DNA for clues about how the brain is constructed. While I agree that our DNA will not tell us how to build a fully functional brain, it will tell us how to start the process of brain-building from scratch.
Time-frames
Inevitably the question as to 'when' crops up. Personally, I couldn't care less. I'm more interested in viability than timelines. But, if pressed for an answer, my feeling is that we are still quite a ways off. Kurzweil's prediction of 2030 is uncomfortably short in my opinion; his analogies to the human genome project are unsatisfying. This is a project of much greater magnitude, not to mention that we're still likely heading down some blind alleys.
My own feeling is that we'll likely be able to emulate the human brain in about 50 to 75 years. I will admit that I'm pulling this figure out of my butt as I really have no idea. It's more a feeling than a scientifically-backed estimate.
Lastly, it's worth noting that, given the capacity to recreate a human brain in digital substrate, we won't be too far off from creating considerably greater than human intelligence. Computer theorist Eliezer Yudkowsky has claimed that, because of the brain's particular architecture, we may be able to accelerate its processing speed by a factor of a million relatively easily. Consequently, predictions as to when we may hit the Singularity will likely coincide with the advent of a fully emulated human brain.
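Where does a figure like "a factor of a million" come from? A back-of-the-envelope comparison, using my own rough numbers rather than Yudkowsky's derivation: biological neurons spike at most a few hundred times per second, while digital hardware switches billions of times per second.

```python
# Back-of-the-envelope only, with assumed figures (not Yudkowsky's argument).
neuron_rate_hz = 200          # rough upper bound on sustained neural firing rates
transistor_rate_hz = 2e9      # a mainstream CPU clock
print(f"raw rate ratio: {transistor_rate_hz / neuron_rate_hz:.0e}")  # ~1e+07
```

A raw clock-rate ratio is obviously not the same thing as a million-fold speedup of a whole emulated brain, but it shows why figures of that order get thrown around.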
Myers still thinks Kurzweil does not understand the brain
The blog war between PZ Myers and Ray Kurzweil continues. Myers has now retorted to Kurzweil's retort:
...you can't measure the number of transistors in an Intel CPU and then announce, "A-ha! We now understand what a small amount of information is actually required to create all those operating systems and computer games and Microsoft Word, and it is much, much smaller than everyone is assuming." Put it in those terms, and the Kurzweil fanboys would laugh at him; put it in terms of something they don't understand at all, like the development and function of the brain, and they're willing to go along with the pretense that the genome tells us that the whole organism is simpler than they thought.
I presume they understand that if you program a perfect Intel emulator, you don't suddenly get Halo: Reach for free, as an emergent property of the system. You can buy the code and add it to the system, sure, but in this case, we can't run down to GameStop and buy a DVD with the human OS in it and install it on our artificial brain. You're going to have to do the hard work of figuring out how that works and reverse engineering it, as well. And understanding how the processor works is necessary to do that, but not sufficient.
Myers concludes,

In short, here's Kurzweil's claim: the brain is simpler than we think, and thanks to the accelerating rate of technological change, we will understand its basic principles of operation completely within a few decades. My counterargument, which he hasn't addressed at all, is that 1) his argument for that simplicity is deeply flawed and irrelevant, 2) he has made no quantifiable argument about how much we know about the brain right now, and I argue that we've only scratched the surface in the last several decades of research, 3) "exponential" is not a magic word that solves all problems (if I put a penny in the bank today, it does not mean I will have a million dollars in my retirement fund in 20 years), and 4) Kurzweil has provided no explanation for how we'll be 'reverse engineering' the human brain. He's now at least clearly stating that decoding the genome does not generate the necessary information — it's just an argument that the brain isn't as complex as we thought, which I've already said is bogus — but left dangling is the question of methodology. I suggest that we need to have a combined strategy of digging into the brain from the perspectives of physiology, molecular biology, genetics, and development, and in all of those fields I see a long hard slog ahead. I also don't see that noisemakers like Kurzweil, who know nothing of those fields, will be making any contribution at all.

Link.
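For what it's worth, Myers's penny example checks out arithmetically (the interest rate here is my assumption, not his):

```python
# Quick arithmetic behind the penny example; the 5% rate is an assumption.
penny, years, annual_rate = 0.01, 20, 0.05
balance = penny * (1 + annual_rate) ** years
print(f"$0.01 at {annual_rate:.0%} for {years} years -> ${balance:.4f}")  # about $0.0265
```

Exponential growth only gets you somewhere dramatic if the rate and the runway are both large; the word by itself guarantees nothing.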
August 20, 2010
Kurzweil responds to PZ Myers
Ray Kurzweil has retorted to PZ Myers's claim that he does not understand the brain:
For starters, I said that we would be able to reverse-engineer the brain sufficiently to understand its basic principles of operation within two decades, not one decade, as Myers reports.
Myers, who apparently based his second-hand comments on erroneous press reports (he wasn’t at my talk), goes on to claim that my thesis is that we will reverse-engineer the brain from the genome. This is not at all what I said in my presentation to the Singularity Summit. I explicitly said that our quest to understand the principles of operation of the brain is based on many types of studies — from detailed molecular studies of individual neurons, to scans of neural connection patterns, to studies of the function of neural clusters, and many other approaches. I did not present studying the genome as even part of the strategy for reverse-engineering the brain.
I mentioned the genome in a completely different context. I presented a number of arguments as to why the design of the brain is not as complex as some theorists have advocated. This is to respond to the notion that it would require trillions of lines of code to create a comparable system. The argument from the amount of information in the genome is one of several such arguments. It is not a proposed strategy for accomplishing reverse-engineering. It is an argument from information theory, which Myers obviously does not understand.

Be sure to read the entire response.
August 17, 2010
Myers: Kurzweil is a "pseudo-scientific dingbat" who "does not understand" the brain

In regards to Kurzweil's claim that the design of the brain is in the genome, Myers writes,
Kurzweil knows nothing about how the brain works. It's [sic] design is not encoded in the genome: what's in the genome is a collection of molecular tools wrapped up in bits of conditional logic, the regulatory part of the genome, that makes cells responsive to interactions with a complex environment. The brain unfolds during development, by means of essential cell:cell interactions, of which we understand only a tiny fraction. The end result is a brain that is much, much more than simply the sum of the nucleotides that encode a few thousand proteins. He has to simulate all of development from his codebase in order to generate a brain simulator, and he isn't even aware of the magnitude of that problem.
We cannot derive the brain from the protein sequences underlying it; the sequences are insufficient, as well, because the nature of their expression is dependent on the environment and the history of a few hundred billion cells, each plugging along interdependently. We haven't even solved the sequence-to-protein-folding problem, which is an essential first step to executing Kurzweil's clueless algorithm. And we have absolutely no way to calculate in principle all the possible interactions and functions of a single protein with the tens of thousands of other proteins in the cell!
Myers continues:

To simplify it so a computer science guy can get it, Kurzweil has everything completely wrong. The genome is not the program; it's the data. The program is the ontogeny of the organism, which is an emergent property of interactions between the regulatory components of the genome and the environment, which uses that data to build species-specific properties of the organism. He doesn't even comprehend the nature of the problem, and here he is pontificating on magic solutions completely free of facts and reason.

Okay, while I agree that Kurzweil's timeline is ridiculously optimistic (I'm thinking we'll achieve a modeled human brain sometime between 2075 and 2100), Myers's claim that Kurzweil "knows nothing" about the brain is as incorrect as it is disingenuous. Say what you will about Kurzweil, but the man does his homework. While I wouldn't make the claim that he does seminal work in the neurosciences, I will say that his efforts at describing the brain along computationally functionalist terms are important. The way he has described the brain's redundancy and massively repeating arrays is as fascinating as it is revealing.
Moreover, Myers's claim that the human genome cannot inform our efforts at reverse engineering the brain is equally unfair and ridiculous. While I agree that the genome is not the brain, it undeniably contains the information required to construct a brain from scratch. This is irrefutable and Myers can stamp his feet in protest all he wants. We may be unable to properly read this data as yet, or even execute the exact programming required to set the process in motion, but that doesn't mean the problem is intractable. It's still early days. In addition, we have an existing model, the brain, to constantly juxtapose against the data embedded in our DNA (e.g. cognitive mapping).
Again, it just seems excruciatingly intuitive and obvious to think that our best efforts at emulating an entire brain will be informed to a considerable extent by pre-existing data, namely our own DNA and its millions upon millions of years of evolutionary success.
Oh, and Myers: Let's lose the ad hominem.
August 14, 2010
IBM maps Macaque brain network

The Proceedings of the National Academy of Sciences (PNAS) published a landmark paper entitled “Network architecture of the long-distance pathways in the macaque brain” (an open-access paper) by Dharmendra S. Modha (IBM Almaden) and Raghavendra Singh (IBM Research-India) with major implications for reverse-engineering the brain and developing a network of cognitive-computing chips.
Dr. Modha writes:
We have successfully uncovered and mapped the most comprehensive long-distance network of the Macaque monkey brain, which is essential for understanding the brain’s behavior, complexity, dynamics and computation. We can now gain unprecedented insight into how information travels and is processed across the brain. We have collated a comprehensive, consistent, concise, coherent, and colossal network spanning the entire brain and grounded in anatomical tracing studies that is a stepping stone to both fundamental and applied research in neuroscience and cognitive computing.

Link.
July 15, 2010
Gelernter's 'dream logic' and the quest for artificial intelligence

But Gelernter starts to go off the rails toward the conclusion of the essay. His claim that an artificial consciousness would be nothing more than a zombie mind is unconvincing, as is his contention that emotional capacities are a necessary component of the cognitive spectrum. There is no reason to believe, from a functionalist perspective, that the neural correlates of consciousness cannot take root in an alternative and non-biological medium. And there are examples of fully conscious human beings without the ability to experience emotions.
Gelernter, like a lot of AI theorists, needs to brush up on his neuroscience.
At any rate, here's an excerpt from the article; you can judge the efficacy of his arguments for yourself:
As far as we know, there is no way to achieve consciousness on a computer or any collection of computers. However — and this is the interesting (or dangerous) part — the cognitive spectrum, once we understand its operation and fill in the details, is a guide to the construction of simulated or artificial thought. We can build software models of Consciousness and Memory, and then set them in rhythmic motion.
The result would be a computer that seems to think. It would be a zombie (a word philosophers have borrowed from science fiction and movies): the computer would have no inner mental world; would in fact be unconscious. But in practical terms, that would make no difference. The computer would ponder, converse and solve problems just as a man would. And we would have achieved artificial or simulated thought, "artificial intelligence."
But first there are formidable technical problems. For example: there can be no cognitive spectrum without emotion. Emotion becomes an increasingly important bridge between thoughts as focus drops and re-experiencing replaces recall. Computers have always seemed like good models of the human brain; in some very broad sense, both the digital computer and the brain are information processors. But emotions are produced by brain and body working together. When you feel happy, your body feels a certain way; your mind notices; and the resonance between body and mind produces an emotion. "I say again, that the body makes the mind" (John Donne).
The natural correspondence between computer and brain doesn't hold between computer and body. Yet artificial thought will require a software model of the body, in order to produce a good model of emotion, which is necessary to artificial thought. In other words, artificial thought requires artificial emotions, and simulated emotions are a big problem in themselves. (The solution will probably take the form of software that is "trained" to imitate the emotional responses of a particular human subject.)
One day all these problems will be solved; artificial thought will be achieved. Even then, an artificially intelligent computer will experience nothing and be aware of nothing. It will say "that makes me happy," but it won't feel happy. Still: it will act as if it did. It will act like an intelligent human being.
And then what?
July 12, 2010
Wisdom: From Philosophy to Neuroscience by Stephen S. Hall [book]

Promotional blurbage:
A compelling investigation into one of our most coveted and cherished ideals, and the efforts of modern science to penetrate the mysterious nature of this timeless virtue.
We all recognize wisdom, but defining it is more elusive. In this fascinating journey from philosophy to science, Stephen S. Hall gives us a dramatic history of wisdom, from its sudden emergence in four different locations (Greece, China, Israel, and India) in the fifth century B.C. to its modern manifestations in education, politics, and the workplace. We learn how wisdom became the provenance of philosophy and religion through its embodiment in individuals such as Buddha, Confucius, and Jesus; how it has consistently been a catalyst for social change; and how revelatory work in the last fifty years by psychologists, economists, and neuroscientists has begun to shed light on the biology of cognitive traits long associated with wisdom—and, in doing so, begun to suggest how we might cultivate it.
Hall explores the neural mechanisms for wise decision making; the conflict between the emotional and cognitive parts of the brain; the development of compassion, humility, and empathy; the effect of adversity and the impact of early-life stress on the development of wisdom; and how we can learn to optimize our future choices and future selves.
Hall’s bracing exploration of the science of wisdom allows us to see this ancient virtue with fresh eyes, yet also makes clear that despite modern science’s most powerful efforts, wisdom continues to elude easy understanding.

Hall's book is part of a larger trend that, along with happiness studies, is starting to enter (or is that re-enter?) mainstream academic and clinical realms of inquiry.
A. C. Grayling has penned an insightful and critical review of Hall's book:
First, though, one must point to another and quite general difficulty with contemporary research in the social and neurosciences, namely, a pervasive mistake about the nature of mind. Minds are not brains. Please note that I do not intend anything non-materialistic by this remark; minds are not some ethereal spiritual stuff a la Descartes. What I mean is that while each of us has his own brain, the mind that each of us has is the product of more than that brain; it is in important part the result of the social interaction with other brains. As essentially social animals, humans are nodes in complex networks from which their mental lives derive most of their content. A single mind is, accordingly, the result of interaction between many brains, and this is not something that shows up on a fMRI scan. The historical, social, educational, and philosophical dimensions of the constitution of individual character and sensibility are vastly more than the electrochemistry of brain matter by itself. Neuroscience is an exciting and fascinating endeavour which is teaching us a great deal about brains and the way some aspects of mind are instantiated in them, but by definition it cannot (and I don't for a moment suppose that it claims to) teach us even most of what we would like to know about minds and mental life.
I think the Yale psychologist Paul Bloom put his finger on the nub of the issue in the March 25th number of Nature where he comments on neuropsychological investigation into the related matter of morality. Neuroscience is pushing us in the direction of saying that our moral sentiments are hard-wired, rooted in basic reactions of disgust and pleasure. Bloom questions this by the simple expedient of reminding us that morality changes. He points out that "contemporary readers of Nature, for example, have different beliefs about the rights of women, racial minorities and homosexuals compared with readers in the late 1800s, and different intuitions about the morality of practices such as slavery, child labour and the abuse of animals for public entertainment. Rational deliberation and debate have played a large part in this development." As Bloom notes, widening circles of contacts with other people and societies through a globalizing world plays a part in this, but it is not the whole story: for example, we give our money and blood to help strangers on the other side of the world. "What is missing, I believe," says Bloom, and I agree with him, "is an understanding of the role of deliberate persuasion."
Contemporary psychology, and especially neuropsychology, ignores this huge dimension of the debate not through inattention but because it falls well outside its scope. This is another facet of the point that mind is a social entity, of which it does not too far strain sense to say that any individual mind is the product of a community of brains.
June 6, 2010
Sandberg: Whole Brain Emulation: The Logical Endpoint of Neuroinformatics?
Anders Sandberg delivered a talk on whole brain emulation at Google last month.