August 30, 2010

xkcd: Exoplanets

Yep, that's about right: Amazing discoveries that are now totally taken for granted.

Five ways to well-being [life extension]

There's no question that our sense of well-being is a significant contributor to our overall longevity. While it may or may not directly affect aging, it certainly influences how we engage with life and with others—and that, in turn, impacts our mental and physical health.

But the idea that we can actually choose to be happy is largely rejected in our society; much of Western culture is rooted in the idea that externalities control our mood and that we as individuals are merely reacting to either negative or positive stimuli. That's why the media constantly hammers us with the message that purchasing a next generation iGadget will unlock our happiness.

Fortunately, we have more control over our happiness than we think.

Back in 2008, the New Economics Foundation was commissioned by the UK Government’s Foresight Project on Mental Capital and Well-being to review the inter-disciplinary work of over 400 scientists from across the world. The aim was to identify a set of evidence-based actions to improve well-being, which individuals would be encouraged to build into their daily lives.

The NEF came up with five evidence-based ways to well-being:
  1. Connect: Make an effort to be social, whether it be with friends, colleagues or neighbors
  2. Be Active: Make an effort to be more physical, whether it be walking, running, cycling, dancing, whatever
  3. Take notice: Stop sleepwalking and start being curious, inquisitive, and mindful; savour the little things
  4. Keep learning: Learn a new skill, rediscover an old hobby, push past your comfort zones and what you think you know
  5. Give: Do something nice for a friend or stranger, volunteer your time; see yourself linked to the wider community
The NEF suggests that we try to engage in all five of these activities over the course of each day.

Seems simple, no? Go for it—choose to be happy. And live a longer, happier life.

Happy planet

It has been very encouraging to see the recent increase in attention given to happiness and well-being studies. It's about time we started measuring these things—and started making our governments more responsible and accountable in these areas. Studies are increasingly revealing that gains in GDP and overall wealth are not having an impact on our personal well-being. It would seem that the capitalist interpretation of the 'pursuit of happiness' is just that: a perpetual quest for an illusory goal that leaves us unsatisfied.

While seemingly counterintuitive, it appears that beyond a certain Maslowian point where our basic needs are met, our relative happiness peaks. As behavioral economist Daniel Kahneman has revealed, once we hit $60,000 a year in salary we reach a kind of plateau with respect to the happiness of the experiencing self. It doesn't matter if you make $120,000 a year or $500,000 a year—according to his research, you're going to be just as happy as if you were making $60,000.

You should stop and think about that for a moment. That's a pretty amazing revelation.

Sure, you can convince yourself that you'd be happier in those higher salary ranges, but you'd be going against empirical data that suggests otherwise. What makes you think you'd be a special case?

Speaking of Kahneman, his TED Talk is one of the best out there:

Also along these lines is the Happy Planet Index (HPI), which measures human well-being and environmental impact. The HPI was introduced by the New Economics Foundation (NEF) in July 2006 and was designed to challenge well-established indices of countries’ development, such as Gross Domestic Product (GDP) and the Human Development Index (HDI), which are seen as not taking sustainability into account. According to the HPI, the five happiest countries in the world are Costa Rica, the Dominican Republic, Jamaica, Guatemala, and Vietnam—not exactly your G8 type of nations.

Statistician Nic Marks thinks he knows what's going on. Like so many others these days, he is starting to ask why we measure a nation's success by its productivity—instead of by the happiness and well-being of its people. This is why he introduced the HPI. Moreover, he believes that a happy life doesn't have to cost the earth.

Marks argues that quality of life is measurable, and that true contentment comes not from the accumulation of material wealth but from our connections with others, engagement with the world, and a sense of autonomy. To back his claims, Marks has created statistical methods to measure happiness, analyzing and interpreting the evidence so that it can be applied to such policy fields as education, sustainable development, healthcare, and economics.
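To give a rough sense of how such an index can work, here is a heavily simplified sketch: divide "happy life years" (survey-based life satisfaction scaled by life expectancy) by per-capita ecological footprint. The numbers below are purely illustrative, and the published NEF index applies statistical adjustments omitted here.

```python
def happy_planet_index(life_satisfaction, life_expectancy, footprint):
    """Simplified HPI: 'happy life years' per unit of ecological footprint.

    life_satisfaction: average survey score on a 0-10 scale
    life_expectancy:   years at birth
    footprint:         global hectares per capita
    (The published NEF index adds statistical adjustments omitted here.)
    """
    happy_life_years = (life_satisfaction / 10.0) * life_expectancy
    return happy_life_years / footprint

# Purely illustrative numbers: a modest-footprint country can outscore
# a wealthier, high-footprint one despite similar reported satisfaction.
low_footprint = happy_planet_index(7.3, 78.5, 2.3)
high_footprint = happy_planet_index(7.9, 77.9, 9.4)
print(low_footprint > high_footprint)  # True
```

The point of the structure is that well-being is credited only in proportion to how lightly it sits on the planet—which is exactly why resource-intensive G8 economies fare poorly on it.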

You'll want to check out his recent TED talk:

The HPI is not perfect (it ignores issues like political freedom, human rights and labor rights, for example), but Marks is on the right track. It's about time we started shelving the Puritan Work Ethic in favor of those things that will truly improve the subjective quality of our lives.

August 28, 2010

Science to the rescue! Helping the Chilean miners survive their ordeal

Quite a story developing in Chile: the 33 miners who are trapped 700 meters underground will have to wait about four months before they are rescued. That's obviously not going to be easy on the men, who have already been trapped for over 18 days. Keeping it together psychologically, physically and socially for that length of time will undoubtedly prove challenging.

But this is where science can step in and help. While this incident is dreadful for all those involved, it'll be an extraordinary opportunity to apply our knowledge and help these guys cope with the extreme isolation — and at the same time learn from it. The whole experience will serve as an important case study across several scientific disciplines.

This is why NASA has been called in to help. The space agency obviously has tremendous experience and expertise managing extended bouts of isolation, both from physiological and psychological perspectives. While it's still early days, NASA has already recommended that the men keep exercising and adopt a routine where days are separated from nights.

As noted, the two primary areas of concern are the men's physical and psychological health. While food and supplies are now being routed through a six-inch tunnel, there are still extreme constraints in terms of social contact and the problem of having so many men living in such close quarters. They are living without natural sunlight and the temperature is hovering around 29.5°C (85°F).

Compounding this is the stress the men must feel; avoiding an impending sense of doom and bouts of claustrophobia can't be easy. I myself suffer from mild claustrophobia and my stomach tightens at the very thought. As of today, at least five of the men are experiencing severe depression. Chilean officials have already sent down anti-depressants to help the miners cope — though the miners have reportedly asked for some beer. Advisors have made it clear that, in order to keep it together, the men must maintain a healthy sense of optimism. That's easier said than done, but regular contact with the surface and frequent updates about the rescue effort will go a long way.

From a social psychology standpoint, it's interesting to note that the men appear to be in good spirits. They've also put the Lord of the Flies myth to rest; soon after the ordeal started they began organizing themselves. They're holding daily meetings, assigning tasks, and portioning out rations (they actually made three days of rations last 17 days). Their ability to self-organize and maintain order given such extreme circumstances is proving to be as fascinating as it is praiseworthy.

NASA has suggested that the miners do regular exercises to prevent muscle atrophy as they await extraction. This will prove beneficial from not just a health standpoint, but from a psychological one as well; it's well documented that exercise tends to improve mood. There's also word that, in addition to food and medicine, other items such as flashlights and games are being sent down. It's also very likely that they will be given sun lamps to help simulate natural sunlight. Otherwise, it's very likely that Seasonal Affective Disorder (SAD) will kick in.

Scientists will undoubtedly learn from this episode and eventually apply the lessons to such things as a future expedition to Mars. The data will likely complement that of the Mars isolation study currently underway in Russia. In this experiment, a crew of six volunteers is isolated from the outside world in a space measuring just 550 cubic meters.

Again, I hope that everything will turn out for the best and that this story has a happy ending.

August 26, 2010

Optimize your health with The Zone and Paleo diets [life extension]

If you're like most people these days you're probably very confused about what to do in terms of your diet, particularly as it pertains to maximizing your health and lifespan. For those of you who are serious about making substantive changes to your diet, I have a pair of recommendations to make that will take the guesswork out of your daily eating habits.

Now, before I get into them, let me be clear: these are not fad diets meant to help you lose weight. Sure, they can help you lose weight, but they're ultimately meant to help you optimize your eating habits and, by consequence, improve your health. With these systems, food is looked at from both a therapeutic and an aesthetic perspective; it's all about food that tastes good and is good for you. Consequently, these systems should be looked at as part of a broader set of lifestyle changes. Many of us need to get over the quick-fix mentality that's pervasive in diet culture, and instead make the commitment to permanently change the way we approach our food.

Now, the two systems I am referring to are The Zone and the Paleolithic Diet. I don't wish to present these two options as the be-all-and-end-all of diets, nor as the best. I happen to have familiarity with these diets, and I know that they work. Moreover, they're a great place to start if you're feeling overwhelmed by all the diets out there.

The Zone

Developed by the biochemist Barry Sears, the Zone advocates consuming calories from carbohydrates, protein, and fat in a balanced ratio. It's a diet that requires weighing and measuring portions in accordance with your body composition (i.e. lean body mass) and degree of physical activity. This appeals to the science part of my brain; I know that the proportions have been carefully determined by experts. So, if you're not into weighing and measuring your food, you may as well skip down to the next section.

The Zone works off a "40:30:30" ratio of calories obtained daily from carbohydrates, protein, and fat, respectively. The exact proportions of the macronutrients are broken down into what are called blocks, and each meal consists of a certain number of blocks that have to be eaten in this particular configuration. So, if it's determined that you are a 16-block person, you should aim to eat a total of 16 blocks per day. One block is equal to 9 grams of carbohydrates, 7 grams of protein, and 1.5 grams of fat.
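To make the block arithmetic concrete, here's a minimal sketch using only the per-block gram values quoted above (a full Zone prescription also covers how to determine your block count and split blocks across meals):

```python
# Per-block gram values quoted above: 9 g carbohydrate, 7 g protein, 1.5 g fat.
BLOCK_GRAMS = {"carbohydrate": 9.0, "protein": 7.0, "fat": 1.5}

def daily_totals(blocks):
    """Total daily grams of each macronutrient for a given block count."""
    return {macro: blocks * grams for macro, grams in BLOCK_GRAMS.items()}

totals = daily_totals(16)  # the hypothetical "16-block person"
print(totals)  # {'carbohydrate': 144.0, 'protein': 112.0, 'fat': 24.0}
```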

The Zone also promotes balanced eating throughout the day. Zoners eat about five meals a day, and because lots of protein and low density carbohydrates are encouraged, some of the meals can be substantial in size.

The reason for such strict proportions is to ensure proper hormonal balance. When insulin and glucagon levels are optimal, specific anti-inflammatory chemicals (namely eicosanoids) are released, which have similar effects to aspirin. A 30:40 ratio of protein to carbohydrates triggers this effect, and this is called 'being in the Zone.' Sears claims that these natural anti-inflammatories are both heart- and health-friendly.

When the human body is in caloric balance it is more efficient and does not have to store excess calories as fat. The human body cannot store fat and burn fat at the same time, and it takes time (significant time if insulin levels were high because of unbalanced eating) to switch from the former to the latter. Using stored fat for energy causes weight loss. Other positive effects of the diet include increased energy and mental clarity.

The Zone diet is very flexible in terms of the foods involved (you just have to be strict about proportions) and it is vegetarian friendly.

There's a lot more to this diet than what I've described, but this book will help get you started.

The Zone is great if you feel helpless and know nothing about food. As long as you stick to the principles, you're golden. Moreover, it will help you learn about food and the kinds of portion sizes you should be striving for. Lastly, in terms of credibility, while there may be some skepticism about this approach, the Zone is commonly used by professional athletes (including CrossFitters) to help them dial in and regulate their diet; many athletes swear by it as they see measurable improvements in their performance.

The Paleo Diet

Also referred to as the Cave Man Diet, the Paleo Diet is a nutritional plan that strives to emulate the eating habits of our Stone Age forebears. And this is for very good reason: while our eating habits have changed dramatically since Paleolithic times, our bodies have not. We are not genetically primed to ingest and metabolize most of the foods we eat today, particularly processed foods. The Paleo Diet, therefore, encourages its followers to eat the same foods our ancestors did prior to the Agricultural Revolution.

It has been generally observed that modern human populations subsisting on traditional diets similar to those of Paleolithic hunter-gatherers are largely free of diseases of affluence—namely type 2 diabetes, coronary heart disease, cerebrovascular disease, peripheral vascular disease, obesity and certain forms of cancer. In addition to this, studies of the Paleolithic diet in modern humans have shown some positive health outcomes.

As for the diet itself, it consists primarily of foods that can be hunted and fished, such as meat, offal and seafood, or gathered, such as eggs, insects, fruit, nuts, seeds, vegetables, mushrooms, herbs and spices. So, no bread, rice, pasta, or cherry cheesecake. Other exclusions include all grains, legumes (e.g. beans and peanuts), dairy products, salt, refined sugar and processed oils (although some advocates consider the use of oils with low omega-6/omega-3 ratios, such as olive oil and canola oil, to be healthy and advisable).

Essentially, if your food has more than one ingredient, it's probably not Paleo friendly.

Obviously, the Paleo Diet is not easy for vegetarians, but it is possible. Vegetarians should ensure that they're getting adequate protein intake using egg-powder protein shakes along with supplements like Vitamin B12, Taurine, Carnosine, and Carnitine. If you're vegan, you should probably forget this idea and consider The Zone instead.

All this said, Paleo omnivores typically eat more "ethical meat" than those on other diets; they tend to eat only lean cuts of meat that are free of food additives, favoring wild game and grass-fed beef, since these contain high levels of omega-3 fats compared with grain-fed domestic meats.

You can learn more about the Paleo diet here.

Concluding remarks

For most of us, adopting the Paleo Diet would represent a massive paradigm shift in our eating habits. Going from a completely unrestricted diet to this one is probably a bad idea. If you're looking to change your eating ways, I would strongly recommend that you start with the Zone and work your way from there.

Again, I'm sure there are other diets out there that may be just as good or better than the two I've proposed. But like I said, they're effective and relatively easy to adopt—so long as the will is there. And most importantly, you will start to notice a dramatic improvement in health and performance. Not to mention your waistline.

Personalized Life Extension Conference, October 9-10

Foresight Institute's Christine Peterson recently announced that the Personalized Life Extension Conference will be held at the San Francisco Airport Marriott Hotel from October 9 to 10. The list of speakers looks phenomenal, as does the program.

Keynote speakers include Esther Dyson and Peter Thiel. They'll be joined by Terry Grossman, Greg Fahy, Sonia Arrison, Bruce Ames, Gregory Benford, Melanie Swan, Patri Friedman and many others.

And if you thought there isn't really much you can do or talk about in the ways of personalized life extension, check out the list of strategies and tactics to be discussed:
  • Supplements
  • DNA Testing
  • Telomere Protection
  • Blood Testing
  • Finding a Life Extension Doctor
  • Gadgets
  • Inflammation
  • Calorie Restriction & Intermittent Fasting
  • Sleep
  • Stress reduction
  • Self-experimentation
  • Exercise
  • Enhancement & Brain Function
  • Eating
  • Standards of Information Quality
  • Mood
Be sure to register today.

Singularity Podcast interview now available

I was recently interviewed by Nikola Danaylov for the Singularity Podcast. You can listen to it here.

Matt Ridley: Criticism

A Sentient Developments reader directs our attention to this deliciously scathing review by George Monbiot of Matt Ridley and his work:
He uses it [The Rational Optimist] as a platform to attack governments that, among other crimes, "bail out big corporations". He lambasts intervention and state regulation, insisting that markets deliver the greatest possible benefits to society when left to their own devices. Has there ever been a clearer case of the triumph of faith over experience?

Free-market fundamentalists, apparently unaware of Ridley's own experiment in market liberation, are currently filling cyberspace and the mainstream media with gasps of enthusiasm about his thesis. Ridley provides what he claims is a scientific justification for unregulated business. He maintains that rising consumption will keep enriching us for "centuries and millennia" to come, but only if governments don't impede innovation. He dismisses or denies the environmental consequences, laments our risk-aversion, and claims that the market system makes self-interest "thoroughly virtuous". All will be well in the best of all possible worlds, as long as the "parasitic bureaucracy" keeps its nose out of our lives.

His book is elegantly written and cast in the language of evolution, but it's the same old cornutopian nonsense we've heard one hundred times before (cornutopians are people who envisage a utopia of limitless abundance). In this case, however, it has already been spectacularly disproved by the author's experience.

The Rational Optimist is riddled with excruciating errors and distortions. Ridley claims, for instance, that "every country that tried protectionism" after the second world war suffered as a result. He cites South Korea and Taiwan as "countries that went the other way", and experienced miraculous growth. In reality, the governments of both nations subsidised key industries, actively promoted exports, and used tariffs and laws to shut out competing imports. In both countries the state owned all the major commercial banks, allowing it to make decisions about investment.

Matt Ridley is the Rational Optimist

Matt Ridley published a book earlier this year called The Rational Optimist: How Prosperity Evolves. There's some good ammo in here for those futurists who are regularly accused of having too many starry-eyed visions of what tomorrow holds—and for those who simply believe that the human condition is steadily improving.

Throughout history, says Ridley, the engine of human progress has been the meeting and mating of ideas to make new ideas. It's not important how clever individuals are, he argues, what really matters is how smart the collective brain is.

As an aside, and not that this is his particular argument, the notion of the 'collective brain' being superior to the super-enhanced human or artificial brain is an idea that's starting to gain some currency in futurist circles. Some argue that base intelligence doesn't matter; rather, it's the collectivity of ideas that gives human civilization its power. I'm not quite sold on this premise, but it's certainly worth considering; I do recognize that we are standing on the shoulders of giants.

Promo blurbage:
Life is getting better—and at an accelerating rate. Food availability, income, and life span are up; disease, child mortality, and violence are down — all across the globe. Though the world is far from perfect, necessities and luxuries alike are getting cheaper; population growth is slowing; Africa is following Asia out of poverty; the Internet, the mobile phone, and container shipping are enriching people’s lives as never before. The pessimists who dominate public discourse insist that we will soon reach a turning point and things will start to get worse. But they have been saying this for two hundred years.

Yet Matt Ridley does more than describe how things are getting better. He explains why. Prosperity comes from everybody working for everybody else. The habit of exchange and specialization—which started more than 100,000 years ago—has created a collective brain that sets human living standards on a rising trend. The mutual dependence, trust, and sharing that result are causes for hope, not despair.

This bold book covers the entire sweep of human history, from the Stone Age to the Internet, from the stagnation of the Ming empire to the invention of the steam engine, from the population explosion to the likely consequences of climate change. It ends with a confident assertion that thanks to the ceaseless capacity of the human race for innovative change, and despite inevitable disasters along the way, the twenty-first century will see both human prosperity and natural biodiversity enhanced. Acute, refreshing, and revelatory, The Rational Optimist will change your way of thinking about the world for the better.
The WSJ recently published an excerpt from his prologue:
To argue that human nature has not changed, but human culture has, does not mean rejecting evolution – quite the reverse. Humanity is experiencing an extraordinary burst of evolutionary change, driven by good old-fashioned Darwinian natural selection. But it is selection among ideas, not among genes. The habitat in which these ideas reside consists of human brains. This notion has been trying to surface in the social sciences for a long time. The French sociologist Gabriel Tarde wrote in 1888: 'We may call it social evolution when an invention quietly spreads through imitation.' The Austrian economist Friedrich Hayek wrote in the 1960s that in social evolution the decisive factor is 'selection by imitation of successful institutions and habits'. The evolutionary biologist Richard Dawkins in 1976 coined the term 'meme' for a unit of cultural imitation. The economist Richard Nelson in the 1980s proposed that whole economies evolve by natural selection.

This is what I mean when I talk of cultural evolution: at some point before 100,000 years ago culture itself began to evolve in a way that it never did in any other species – that is, to replicate, mutate, compete, select and accumulate – somewhat as genes had been doing for billions of years. Just like natural selection cumulatively building an eye bit by bit, so cultural evolution in human beings could cumulatively build a culture or a camera. Chimpanzees may teach each other how to spear bushbabies with sharpened sticks, and killer whales may teach each other how to snatch sea lions off beaches, but only human beings have the cumulative culture that goes into the design of a loaf of bread or a concerto.

Yes, but why? Why us and not killer whales? To say that people have cultural evolution is neither very original nor very helpful. Imitation and learning are not themselves enough, however richly and ingeniously they are practised, to explain why human beings began changing in this unique way. Something else is necessary; something that human beings have and killer whales do not. The answer, I believe, is that at some point in human history, ideas began to meet and mate, to have sex with each other.
Continue reading.

Ridley's recent TED talk:

August 25, 2010

Timothy Taylor's 'artificial ape'

Gizmodo has posted an interview with Timothy Taylor, author of The Artificial Ape: How Technology Changed the Course of Human Evolution.

Darwin, claims Taylor, was wrong in seeing human evolution as a result of the same processes that account for other evolution in the biological world—especially when it comes to the size of our cranium:
Darwin had to put large cranial size down to sexual selection, arguing that women found brainy men sexy. But biomechanical factors make this untenable. I call this the smart biped paradox: once you are an upright ape, all natural selection pressures should be in favour of retaining a small cranium. That's because walking upright means having a narrower pelvis, capping babies' head size, and a shorter digestive tract, making it harder to support big, energy-hungry brains. Clearly our big brains did evolve, but I think Darwin had the wrong mechanism. I believe it was technology. We were never fully biological entities. We are and always have been artificial apes.
Taylor also posits the notion of the 'survival of the weakest':
Technology allows us to accumulate biological deficits: we lost our sharp fingernails because we had cutting tools, we lost our heavy jaw musculature thanks to stone tools. These changes reduced our basic aggression, increased manual dexterity and made males and females more similar. Biological deficits continue today. For example, modern human eyesight is on average worse than that of humans 10,000 years ago.

Unlike other animals, we don't adapt to environments—we adapt environments to us. We just passed a point where more people on the planet live in cities than not. We are extended through our technology. We now know that Neanderthals were symbolic thinkers, probably made art, had exquisite tools and bigger brains. Does that mean they were smarter?

Evidence shows that over the last 30,000 years there has been an overall decrease in brain size and the trend seems to be continuing. That's because we can outsource our intelligence. I don't need to remember as much as a Neanderthal because I have a computer. I don't need such a dangerous and expensive-to-maintain biology any more. I would argue that humans are going to continue to get less biologically intelligent.
On the topic of the technological Singularity, Taylor agrees that intelligence is becoming technological—but that's how it's been from the start:
That's what it is to be human. And in that sense, there's nothing scary in [Kurzweil's] vision of artificial intelligence. I don't see any sign of intentionality in machine intelligence now. I'm not saying it will never happen, but I think it's a lot further away than Kurzweil says.

Will computers eventually be able to develop their own computers that are even smarter than them, creating a sudden acceleration that leaves the biological behind and leaves us as a kind of pond scum while the robots take over? That scenario implies a sharp division between humans and our technology, and I don't think such a division exists. Humans are artificial apes - we are biology plus technology. We are the first creatures to exist in that nexus, not purely Darwinian entities. Kurzweil says that the technological realm cannot be reduced to the biological, so there we agree.

Philippe Verdoux on the enhancement paradox

IEET contributor Philippe Verdoux wonders if enhancing is necessary in order to decide whether or not enhancing is a good idea:
Many transhumanists are enthusiastic about the possibilities of cognitive enhancement. Such enthusiasts might say something like: “I want to use advanced technologies – from genetic engineering and psychoactive pharmaceuticals to neural implants and even mind-uploading – to increase my intelligence, to make me ‘smarter, wiser, or more creative’ [PDF], to produce a ‘smarter and more virtuous’ person, to mentally and emotionally augment myself.” This notion of enhancement presupposes some conception of the self. Specifically, it assumes that the self is capable of enduring such modifications, e.g., as a pattern, or as an immaterial soul, or whatever. The resulting enhanced being would thus still be me, it would just be a different and “better” (according to some set of criteria) version of me.
Now, an interesting paradox arises when one combines the above claims with a specific (and controversial) stance on what the self is...
More importantly, though, it must be pointed out that cognitive enhancement is only one route to the destination of greater-than-human-intelligence: the other is artificial intelligence (AI). Another option would thus be to create a superintelligent AI system that could help us deliberate about whether or not we should use cognitive enhancements. This would offer a way out of the paradox, since it doesn’t involve modifying ourselves.

The trouble is, however, that AI may turn out to be more difficult than enhancing the neurobiological core of Homo sapiens, which means that the paradox would remain intact: in this case, the most feasible way to engender a new species of ultra-smart posthumans would be through human enhancement and not AI.

Finally, one could generalize the basic idea to AI as well. That is, we might pose a general moral question about whether or not it would be good to create a species of posthumans through either method of enhancement or AI. Our ability to answer this question, though, is no doubt far more limited than the ability of a superintelligent biotechnological hybrid or completely synthetic posthuman to answer it.

Artificially fabricated cornea integrated with human eye

A new study from researchers in Canada and Sweden has shown that biosynthetic corneas can help regenerate and repair damaged eye tissue and improve vision in humans.

"This study is important because it is the first to show that an artificially fabricated cornea can integrate with the human eye and stimulate regeneration," said senior author Dr. May Griffith of the Ottawa Hospital Research Institute, the University of Ottawa and Linköping University. "With further research, this approach could help restore sight to millions of people who are waiting for a donated human cornea for transplantation."

Dr. Griffith and her colleagues began developing biosynthetic corneas in Ottawa, Canada more than a decade ago, using collagen produced in the laboratory and moulded into the shape of a cornea. After extensive laboratory testing, Dr. Griffith began collaborating with Dr. Per Fagerholm, an eye surgeon at Linköping University in Sweden, to provide the first-in-human experience with biosynthetic cornea implantation.

Together, they initiated a clinical trial in 10 Swedish patients with advanced keratoconus or central corneal scarring. Each patient underwent surgery on one eye to remove damaged corneal tissue and replace it with the biosynthetic cornea, made from synthetically cross-linked recombinant human collagen. Over two years of follow-up, the researchers observed that cells and nerves from the patients' own corneas had grown into the implant, resulting in a "regenerated" cornea that resembled normal, healthy tissue. Patients did not experience any rejection reaction or require long-term immune suppression, which are serious side effects associated with the use of human donor tissue. The biosynthetic corneas also became sensitive to touch and began producing normal tears to keep the eye oxygenated. Vision improved in six of the ten patients, and after contact lens fitting, vision was comparable to conventional corneal transplantation with human donor tissue.

"We are very encouraged by these results and by the great potential of biosynthetic corneas," said Dr. Fagerholm. "Further biomaterial enhancements and modifications to the surgical technique are ongoing, and new studies are being planned that will extend the use of the biosynthetic cornea to a wider range of sight-threatening conditions requiring transplantation."


August 24, 2010

Hauser guilty of science misconduct

Man, it pains me to post this, but it's important to know: Harvard says Marc Hauser is guilty of science misconduct.

Hauser, the author of Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong, is a noted researcher in the field of animal cognition. He had been placed on leave following accusations by his students that he had purposely fabricated data in his research. Hauser's work relied on observing the responses of tamarin monkeys to stimuli such as changes in sound patterns; he claimed the monkeys possessed thinking skills often viewed as unique to humans and apes.

Hauser has posted a response to the charge:
I am deeply sorry for the problems this case has caused to my students, my colleagues, and my university.

I acknowledge that I made some significant mistakes and I am deeply disappointed that this has led to a retraction and two corrections. I also feel terrible about the concerns regarding the other five cases, which involved either unpublished work or studies in which the record was corrected before submission for publication.

I hope that the scientific community will now wait for the federal investigative agencies to make their final conclusions based on the material that they have available.

I have learned a great deal from this process and have made many changes in my own approach to research and in my lab's research practices.

Research and teaching are my passion. After taking some time off, I look forward to getting back to my work, mindful of what I have learned in this case. This has been painful for me and those who have been associated with the work.
Emory University primate researcher Frans de Waal has chimed in:
It is good that Harvard now confirms the rumors, so that there is no doubt that they found actual scientific misconduct, and that they will take appropriate action. But it leaves open whether we in the field of animal behavior should just worry about those three articles or about many more, and then there are also publications related to language and morality that include data that are now in question. From my reading of the dean's letter, it seems that all data produced by this lab over the years are potentially in question.
As has psychologist David Premack:
Dishonesty in cognitive science is somehow more disturbing than dishonesty in biology or physical science. The latter threatens the lives of people, producing a kind of harm we readily comprehend. The former puzzles us: it produces no physical harm, but threatens our standards, a kind of harm we do not readily understand. Because he caused no physical harm, we see him as discrediting everything he touched, including science itself. Hauser, a gifted writer, had no need for shortcuts.

Speeding up supercomputer simulations using Einsteinian relativity

Writing in Technology Review, Christopher Mims reports on how physicists can use Einstein's relativity to speed up supercomputer simulations by as much as 10,000%. He says it's not the algorithm or the hardware but the reference frame that needed an update:
Physicists realized that because the laser is accelerating electrons in its path to nearly the speed of light, relativistic effects start to be a big deal - the same effects first discovered by Albert Einstein.

And if we remember anything from A Brief History of Time or even the original Planet of the Apes, it's that at speeds approaching the speed of light, where the observer is standing has a huge impact on what they perceive - this is, for example, the reason that an astronaut traveling close to the speed of light would age much slower than the people he or she left behind on earth.

Previously, all simulations of laser-plasma accelerators were run from the perspective of a physicist standing somewhere in the vicinity of the experiment - in other words, someone who sees a super short laser pulse traveling into a near-stationary plasma. Mathematically, this is very hard to simulate - the laser pulse is extremely brief.

But what if, instead, we take the perspective of the plasma itself? Now, relative to the laser, it's as if the plasma is traveling toward the beam of light at near-light speed. Because of relativistic effects, this stretches out the beam of the laser, making it longer and mathematically more tractable to simulate.

Voila - the resulting algorithm is hundreds of times faster than previous attempts to simulate a laser-plasma accelerator.
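The payoff of the frame change is plain special-relativity arithmetic: in a frame moving at β = v/c along the laser's path, the pulse length is stretched by the relativistic Doppler factor (1 + β)γ. Here's a minimal sketch of that calculation; the numbers are illustrative choices of my own, not figures from the article:

```python
import math

def gamma(beta):
    """Lorentz factor for a frame moving at beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

def boosted_pulse_length(pulse_length, beta):
    """Pulse length seen from the boosted frame: stretched by the
    relativistic Doppler factor (1 + beta) * gamma(beta)."""
    return pulse_length * (1.0 + beta) * gamma(beta)

# A ~30-femtosecond pulse is roughly 9 micrometres long in the lab frame;
# with a boost of gamma = 10 it is stretched about 20-fold, shrinking the
# gap between the shortest and longest scales the simulation must resolve.
c = 3.0e8                        # speed of light, m/s
pulse = 30e-15 * c               # lab-frame pulse length, ~9e-6 m
beta = math.sqrt(1 - 1 / 10**2)  # beta corresponding to gamma = 10
print(boosted_pulse_length(pulse, beta) / pulse)  # ~19.95
```

Since the stretch factor grows roughly as 2γ, even a modest boost tames the disparity between the micrometre-scale pulse and the much longer plasma that makes the lab-frame simulation so expensive.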

August 23, 2010

NYT: Semenya Is Back, but Acceptance Lags

As I predicted, the return of intersexed athlete Caster Semenya to the track is not being received very well by her fellow competitors. From the New York Times:
Like it or not — and many elite runners clearly do not — track and field’s governing body, the International Association of Athletics Federations, has ruled, after an excruciatingly lengthy process, that Semenya may compete as a woman.

Barring an intense and unlikely legal counterattack, it is difficult to imagine the association backtracking on Semenya now. Not after being justly pilloried for allowing news of her initial gender tests to become public; not after taking 11 months to clarify and confirm her eligibility, leaving her in a brutal state of limbo.

It's not all about Ray: There's more to Singularity studies than Kurzweil

I'm finding myself a bit disturbed these days about how fashionable it has become to hate Ray Kurzweil.

It wasn't too long ago, with the publication of The Age of Spiritual Machines, that he was the cause célèbre of our time. I'm somewhat at a loss to explain what has happened in the public's mind since then; his ideas certainly haven't changed all that much. Perhaps it's a collective impatience with his timelines; the fact that it isn't 2049 yet has led to disillusionment. Or maybe it's because people are afraid of buying into a set of predictions that may never come true—a kind of protection against disappointment or looking foolish.

What's more likely, however, is that his ideas have reached a much wider audience since the release of Spiritual Machines and The Singularity is Near. In the early days his work was picked up by a community that was already primed to accept these sorts of wide-eyed speculations as a valid line of inquiry. These days, everybody and his brother knows about Kurzweil. This has naturally led to an increased chorus of criticism from those who take issue with his thesis—experts and non-experts alike.

As a consequence of this popularity and infamy, Ray has been given a kind of unwarranted ownership over the term 'Singularity.' This has proven problematic on several levels, including the fact that his particular definition and description of the technological singularity is probably not the best one. Kurzweil has essentially equated the Singularity with the steady, accelerating growth of all technologies, including intelligence. His definition, along with its rather ambiguous implications, is inconsistent with the going definition used by other Singularity scholars: that of an 'intelligence explosion' caused by the positive feedback of recursively self-improving machine intelligences.

Moreover, and more importantly, Ray Kurzweil is one voice among many in a community of thinkers who have been tackling this problem for over half a century. What's particularly frustrating these days is that, because Kurzweil has become synonymous with the Singularity concept, and because so many people have been caught in the hate-Ray trend, people are throwing out the Singularity baby with the bathwater while drowning out all other voices. This is not only stupid and unfair, it's potentially dangerous; Singularity studies may prove crucial to the creation of a survivable future.

Consequently, for those readers new to these ideas and this particular community, I have prepared a short list of key players whose work is worth deeper investigation. Their work extends and complements the work of Ray Kurzweil in many respects. And in some cases they present an entirely different vision altogether. But what matters here is that these are all credible academics and thinkers who have worked or who are working on this important subject.

Please note that this is not meant to be a comprehensive list, so if you or your favorite thinker is not on here, just take a chill pill and add a post to the comments section along with some context.
  • John von Neumann: The brilliant Hungarian-American mathematician and computer scientist John von Neumann is regarded as the first person to use the term 'Singularity' to describe a future event. In a conversation recounted by Stanislaw Ulam in 1958, von Neumann noted the accelerating progress of technology and constant changes to human life. He felt that this tendency was giving the appearance of our approaching some essential singularity beyond which human affairs, as we know them, could not continue. In this sense, von Neumann's definition is more a declaration of an event horizon.
  • I. J. Good: One of the first and best definitions of the Singularity was put forth by mathematician I. J. Good. Back in 1965 he wrote of an "intelligence explosion", suggesting that if machines could even slightly surpass human intellect, they might be able to improve their own designs in ways unforeseen by their designers and thus recursively augment themselves into far greater intelligences. He thought that, while the first set of improvements might be small, machines could quickly become better at becoming more intelligent, which could lead to a cascade of self-improvements and a sudden surge to superintelligence (or a Singularity).
  • Marvin Minsky: An inventor and author, Minsky is universally regarded as one of the world's leading authorities in artificial intelligence. He has made fundamental contributions to the fields of robotics and computer-aided learning technologies. Some of his most notable books include The Society of Mind, Perceptrons, and The Emotion Machine. Ray Kurzweil calls him his most important mentor. Minsky argues that our increasing knowledge of the brain and increasing computer power will eventually intersect, likely leading to machine minds and a potential Singularity.
  • Vernor Vinge: In 1983, science fiction writer Vernor Vinge rekindled interest in Singularity studies by publishing an article about the subject in Omni magazine. Later, in 1993, he expanded on his thoughts in the article, "The Coming Technological Singularity: How to Survive in the Post-Human Era." He (now famously) wrote, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Inspired by I. J. Good, he argued that superhuman intelligence would be able to enhance itself faster than the humans who created it. He noted that, "When greater-than-human intelligence drives progress, that progress will be much more rapid." He speculated that this feedback loop of self-improving intelligence could cause large amounts of technological progress within a short period, and that the creation of smarter-than-human intelligence represented a breakdown in humans' ability to model their future. Pre-dating Kurzweil, Vinge used Moore's law in an attempt to predict the arrival of artificial intelligence.
  • Hans Moravec: Carnegie Mellon roboticist Hans Moravec is a visionary thinker who is best known for his 1988 book, Mind Children, where he outlines Moore's law and his predictions about the future of artificial life. Moravec's primary thesis is that humanity, through the development of robotics and AI, will eventually spawn its own successors (which he predicts to happen around 2030-2040). He is also the author of Robot: Mere Machine to Transcendent Mind (1998), in which he further refined his ideas. Moravec writes, "It may seem rash to expect fully intelligent machines in a few decades, when the computers have barely matched insect mentality in a half–century of development. Indeed, for that reason, many long–time artificial intelligence researchers scoff at the suggestion, and offer a few centuries as a more believable period. But there are very good reasons why things will go much faster in the next fifty years than they have in the last fifty."
  • Robin Hanson: Associate professor of economics at George Mason University, Robin Hanson has taken the "Singularity" term to refer to sharp increases in the exponent of economic growth. He lists the agricultural and industrial revolutions as past "singularities." Extrapolating from such past events, he proposes that the next economic singularity should increase economic growth between 60 and 250 times. Hanson contends that such an event could be triggered by an innovation that allows for the replacement of virtually all human labor, such as mind uploads and virtually limitless copying.
  • Nick Bostrom: University of Oxford's Nick Bostrom has done seminal work in this field. In 1998 he published "How Long Before Superintelligence?", in which he argued that superhuman artificial intelligence would likely emerge within the first third of the 21st century. He reached this conclusion by looking at various factors, including different estimates of the processing power of the human brain, trends in technological advancement and how fast superintelligence might be developed once there is human-level artificial intelligence.
  • Eliezer Yudkowsky: Artificial intelligence researcher Eliezer Yudkowsky is a co-founder and research fellow of the Singularity Institute for Artificial Intelligence (SIAI). He is the author of "Creating Friendly AI" (2001) and "Levels of Organization in General Intelligence" (2002). Primarily concerned with the Singularity as a potential human-extinction event, Yudkowsky has dedicated his work to advocacy and developing strategies towards creating survivable Singularities.
  • David Chalmers: An important figure in philosophy of mind studies and neuroscience, David Chalmers has a unique take on the Singularity where he argues that it will happen through self-amplifying intelligence. The only requirement, he claims, is that an intelligent machine be able to create an intelligence smarter than itself. The original intelligence itself need not be very smart. The most plausible way, he says, is simulated evolution. Chalmers feels that if we get to above-human intelligence it seems likely it will take place in a simulated world, not in a robot or in our own physical environment.
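To put Robin Hanson's growth figures (above) in perspective: for exponential growth, doubling time is ln 2 divided by the growth rate, so multiplying the rate by k divides the doubling time by k. A quick sketch, assuming a ~15-year doubling time for today's world economy (my assumption for illustration, not Hanson's):

```python
def doubling_time(growth_multiplier, base_doubling_years=15.0):
    """Doubling time after a growth-mode transition that multiplies
    the exponential growth rate by growth_multiplier (doubling time
    scales as 1/rate)."""
    return base_doubling_years / growth_multiplier

# Hanson's 60x-250x range turns ~15-year doublings into doublings
# every few months to weeks:
print(doubling_time(60) * 12)   # 3.0 (months)
print(doubling_time(250) * 52)  # ~3.1 (weeks)
```

In other words, the claimed transition is on the scale of the farming-to-industry shift: the whole economy doubling in weeks rather than decades.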
Like I said, this is a partial list, but it's a good place to start. Other seminal thinkers include Alan Turing, Alvin Toffler, Eric Drexler, Ben Goertzel, Anders Sandberg, John Smart, Shane Legg, Martin Rees, Stephen Hawking and many, many others. I strongly encourage everyone, including skeptics, to take a deeper look into their work.

And as for all the anti-Kurzweil sentiment, all I can say is that I hope to see it pass. There is no good reason why he—and others—shouldn't explore this important area. Sure, it may turn out that everyone was wrong and that the future isn't at all what we expected. But as Enrico Fermi once said, "There are two possible outcomes: if the result confirms the hypothesis, then you've made a discovery. If the result is contrary to the hypothesis, then you've made a discovery."

Regardless of the outcome, let's make a discovery.

Susan Blackmore: Memes, temes and the 'third replicator'

Neat article by Susan Blackmore in the New York Times about information and how it's subject to Darwinian processes. Blackmore's focus in the essay is on the copying aspect, where she presents the idea of a third replicator: 'technological memes', or what she has dubbed 'temes'.

"They are digital information stored, copied, varied and selected by machines," she writes, "We humans like to think we are the designers, creators and controllers of this newly emerging world but really we are stepping stones from one replicator to the next."

Blackmore continues:
Computers handle vast quantities of information with extraordinarily high-fidelity copying and storage. Most variation and selection is still done by human beings, with their biologically evolved desires for stimulation, amusement, communication, sex and food. But this is changing. Already there are examples of computer programs recombining old texts to create new essays or poems, translating texts to create new versions, and selecting between vast quantities of text, images and data. Above all there are search engines. Each request to Google, Alta Vista or Yahoo! elicits a new set of pages — a new combination of items selected by that search engine according to its own clever algorithms and depending on myriad previous searches and link structures.

This is a radically new kind of copying, varying and selecting, and means that a new evolutionary process is starting up. This copying is quite different from the way cells copy strands of DNA or humans copy memes. The information itself is also different, consisting of highly stable digital information stored and processed by machines rather than living cells. This, I submit, signals the emergence of temes and teme machines, the third replicator.

What should we expect of this dramatic step? It might make as much difference as the advent of human imitation did. Just as human meme machines spread over the planet, using up its resources and altering its ecosystems to suit their own needs, so the new teme machines will do the same, only faster. Indeed we might see our current ecological troubles not as primarily our fault, but as the inevitable consequence of earth’s transition to being a three-replicator planet. We willingly provide ever more energy to power the Internet, and there is enormous scope for teme machines to grow, evolve and create ever more extraordinary digital worlds, some aided by humans and others independent of them. We are still needed, not least to run the power stations, but as the temes proliferate, using ever more energy and resources, our own role becomes ever less significant, even though we set the whole new evolutionary process in motion in the first place.


August 22, 2010

SETI on the lookout for artificial intelligence

Slowly but surely, SETI is starting to get the picture: If we're going to find life out there—and that's a big if—it's probably not going to be biological. Writing in Acta Astronautica, SETI's Seth Shostak says that the odds likely favour detecting machine intelligences rather than "biological" life.

Yay to SETI for finally figuring this out; shame on SETI for taking so long to acknowledge this. Marvin Minsky has been telling them to do so since the Byurakan SETI conference in 1971.

John Elliott, a SETI research veteran based at Leeds Metropolitan University, UK, agrees. "...having now looked for signals for 50 years, SETI is going through a process of realising the way our technology is advancing is probably a good indicator of how other civilisations—if they're out there—would've progressed. Certainly what we're looking at out there is an evolutionary moving target."

Both Shostak and Elliott admit that finding and decoding any eventual message from thinking machines may prove more difficult than in the "biological" case, but the idea does provide new directions to look. Shostak believes that artificially intelligent alien life would be likely to migrate to places where both matter and energy—the only things he says would be of interest to the machines—would be in plentiful supply. That means the SETI hunt may need to focus its attentions near hot, young stars or even near the centres of galaxies.

Personally, I find that last claim to be a bit dubious. While I agree that matter and energy will be important to an advanced machine-based civilization, close proximity to the Galaxy's centre poses a new set of problems, including an increased chance of running into gamma ray bursters and black holes, not to mention the problem of heat—which for a supercomputing civilization will be extremely problematic.

Moreover, SETI still needs to acknowledge that the odds of finding ETIs are close to nil. Instead, Shostak and company are droning on about how we'll likely find traces in about 25 years or so. Such an acknowledgement isn't likely to happen; making a concession like that would probably mean they'd lose funding and have to close up shop.

So their search continues...


Mind Wars: Brain Research and National Defense [book]

Along the lines of my previous post on neurosecurity and information warfare, check out this book by Jonathan D. Moreno: Mind Wars: Brain Research and National Defense (2006). Synopsis:
Imagine a future conflict in which one side can scan from a distance the brains of soldiers on the other side and learn what they may be planning or whether they are confident or fearful. In a crisply written book, University of Virginia ethicist Moreno notes that military contractors have been researching this possibility, as well as the use of electrodes embedded in soldiers' and pilots' brains to enhance their fighting ability. Moreno (Is There an Ethicist in the House?) details the Pentagon's interest in such matters, including studies of paranormal phenomena like ESP, going back several decades. Readers learn that techniques like hypersonic sound and targeted energetic pulses to disable soldiers are close to being used in the field, and even have everyday applications that make "targeted advertising" an understatement. Despite the book's title, Moreno doesn't limit his discussion to brain-related research; he explains the military's investigation of how to enhance soldiers' endurance and reaction time in combat as well as various nonlethal disabling technologies. The ethical implications are addressed throughout the book, but the author leaves substantive discussion to his praiseworthy last chapter.
I really don't know what to make of these claims that the US military is delving into the paranormal. Almost sounds like deliberate disinformation. Or that the higher-ups can't distinguish between sound scientific principles and the work of quacks.

Neurosecurity: The mind has no firewall

Neurosecurity and the potential for so-called 'mind hacking' have interested me for quite some time now, so I was surprised to discover that this topic was covered back in 1997 by Timothy L. Thomas. Writing in Parameters, the US Army War College journal, Thomas warned that the American military risked falling behind in the burgeoning field of information warfare.

His particular concern was that military systems operators could be exploited as 'open systems.' "We need to spend more time researching how to protect the humans in our data management structures," he writes. "Nothing in those structures can be sustained if our operators have been debilitated by potential adversaries or terrorists who--right now--may be designing the means to disrupt the human component of our carefully constructed notion of a system of systems."

Thomas continues,
This "systems" approach to the study of information warfare emphasizes the use of data, referred to as information, to penetrate an adversary's physical defenses that protect data (information) in order to obtain operational or strategic advantage. It has tended to ignore the role of the human body as an information- or data-processor in this quest for dominance except in those cases where an individual's logic or rational thought may be upset via disinformation or deception. As a consequence little attention is directed toward protecting the mind and body with a firewall as we have done with hardware systems. Nor have any techniques for doing so been prescribed. Yet the body is capable not only of being deceived, manipulated, or misinformed but also shut down or destroyed--just as any other data-processing system. The "data" the body receives from external sources--such as electromagnetic, vortex, or acoustic energy waves--or creates through its own electrical or chemical stimuli can be manipulated or changed just as the data (information) in any hardware system can be altered.
Others, however, look beyond simple PSYOP ties to consider other aspects of the body's data-processing capability. One of the principal open source researchers on the relationship of information warfare to the body's data-processing capability is Russian Dr. Victor Solntsev of the Baumann Technical Institute in Moscow. Solntsev is a young, well-intentioned researcher striving to point out to the world the potential dangers of the computer operator interface. Supported by a network of institutes and academies, Solntsev has produced some interesting concepts. He insists that man must be viewed as an open system instead of simply as an organism or closed system. As an open system, man communicates with his environment through information flows and communications media. One's physical environment, whether through electromagnetic, gravitational, acoustic, or other effects, can cause a change in the psycho-physiological condition of an organism, in Solntsev's opinion. Change of this sort could directly affect the mental state and consciousness of a computer operator. This would not be electronic war or information warfare in the traditional sense, but rather in a nontraditional and non-US sense. It might encompass, for example, a computer modified to become a weapon by using its energy output to emit acoustics that debilitate the operator. It also might encompass, as indicated below, futuristic weapons aimed against man's "open system."
There's some great food for thought here, but as an important aside, it's worth noting that this article has a high bullshit-to-reality ratio. Overly enamored of the pseudoscientific areas of inquiry explored by his Russian colleagues, Thomas, quite bizarrely, gave as much credence to paranormal weapons as he did to the development of viable alternative weapons (such as energy-based and psychotronic weapons). Consequently, the credibility of the entire article has to be thrown into question; I advise you to read this essay with a considerable grain of salt.

Link: "The Mind Has No Firewall"


Latest Skeptically Speaking broadcast now available

For those who missed my debate with Greg Fish on Skeptically Speaking Radio, you can now listen to the show in its entirety.

August 21, 2010

David Chalmers: Consciousness is not substrate dependent

A popular argument against uploads and whole brain emulation is that consciousness is somehow rooted in the physical, biological realm. Back in 1995, philosopher David Chalmers addressed this problem in his seminal paper, "Absent Qualia, Fading Qualia, Dancing Qualia." His abstract reads,
It is widely accepted that conscious experience has a physical basis. That is, the properties of experience (phenomenal properties, or qualia) systematically depend on physical properties according to some lawful relation. There are two key questions about this relation. The first concerns the strength of the laws: are they logically or metaphysically necessary, so that consciousness is nothing "over and above" the underlying physical process, or are they merely contingent laws like the law of gravity? This question about the strength of the psychophysical link is the basis for debates over physicalism and property dualism. The second question concerns the shape of the laws: precisely how do phenomenal properties depend on physical properties? What sort of physical properties enter into the laws' antecedents, for instance; consequently, what sort of physical systems can give rise to conscious experience? It is this second question that I address in this paper.
Chalmers sets up a series of arguments and thought experiments which point to the conclusion that functional organization suffices for conscious experience, what he calls nonreductive functionalism. He argues that conscious experience is determined by functional organization without necessarily being reducible to functional organization. This bodes well for the AI and whole brain emulation camp.

Chalmers concludes:
In any case, the conclusion is a strong one. It tells us that systems that duplicate our functional organization will be conscious even if they are made of silicon, constructed out of water-pipes, or instantiated in an entire population. The arguments in this paper can thus be seen as offering support to some of the ambitions of artificial intelligence. The arguments also make progress in constraining the principles in virtue of which consciousness depends on the physical. If successful, they show that biochemical and other non-organizational properties are at best indirectly relevant to the instantiation of experience, relevant only insofar as they play a role in determining functional organization.

Of course, the principle of organizational invariance is not the last word in constructing a theory of conscious experience. There are many unanswered questions: we would like to know just what sort of organization gives rise to experience, and what sort of experience we should expect a given organization to give rise to. Further, the principle is not cast at the right level to be a truly fundamental theory of consciousness; eventually, we would like to construct a fundamental theory that has the principle as a consequence. In the meantime, the principle acts as a strong constraint on an ultimate theory.
Entire paper.

Daniel Dennett on domesticating the wild memes of religion

Check out this fantastic talk by philosopher Daniel Dennett from 2006 about religion as a natural phenomenon and how it works to manipulate its hosts for replicative purposes. Be sure to set aside the 40 minutes and watch the entire lecture.

If you're interested in this topic, be sure to check out his book, Breaking the Spell: Religion as a Natural Phenomenon.

The art of Pixelnase

More about this artist.

Peter Molyneux demos Milo, the virtual boy, at TED

There's more sell than substance in this presentation, but the concept is interesting. It's worth noting that this is still very far from true AGI, and something more akin to a glorified chatbot.

Making brains: Reverse engineering the human brain to achieve AI

The ongoing debate between PZ Myers and Ray Kurzweil about reverse engineering the human brain is fairly representative of the same debate that's been going on in futurist circles for quite some time now. And as the Myers/Kurzweil conversation attests, there is little consensus on the best way for us to achieve human-equivalent AI.

That said, I have noticed an increasing interest in the whole brain emulation (WBE) approach. Kurzweil's upcoming book, How the Mind Works and How to Build One, is a good example of this—but hardly the only one. Futurists with a neuroscientific bent have been advocating this approach for years now, most prominently the European transhumanist camp headed by Nick Bostrom and Anders Sandberg.

While I believe that reverse engineering the human brain is the right approach, I admit that it's not going to be easy. Nor is it going to be quick. This will be a multi-disciplinary endeavor that will require decades of data collection and the use of technologies that don't exist yet. And importantly, success won't come about all at once. This will be an incremental process in which individual developments will provide the foundation for overcoming the next conceptual hurdle.

But we have to start somewhere, and we have to start with a plan.

Rules-based AI versus whole brain emulation

Now, some computer theorists maintain that the rules-based approach to AI will get us there first. Ben Goertzel is one such theorist. I had a chance to debate this with him at the recent H+ Summit at Harvard. His basic argument is that the WBE approach overcomplicates the issue. "We didn't have to reverse engineer the bird to learn how to fly," he told me. Essentially, Goertzel is confident that the hard-coding of artificial general intelligence (AGI) is a more elegant and direct approach; it'll simply be a matter of identifying and developing the algorithms sufficient for the emergence of the traits we're looking for in an AGI—things like learning and adaptation. As for the WBE approach, Goertzel thinks it's overkill and overly time consuming. But he did concede to me that the approach is sound in principle.

This approach aside, like Kurzweil, Bostrom, Sandberg and a growing number of other thinkers, I am drawn to the WBE camp. The idea of reverse engineering the human brain makes sense to me. Unlike the rules-based approach, WBE works off a tried-and-true working model; we don't have to reinvent the wheel. Natural selection, through excruciatingly tedious trial-and-error, was able to create the human brain—and all without a preconceived design. There's no reason to believe that we can't figure out how this was done; if the brain could come about through autonomous processes, then it can most certainly come about through the diligent work of intelligent researchers.

Emulation, simulation and cognitive functionalism

Emulation refers to a 1-to-1 model where all relevant properties of a system exist. This doesn't mean recreating the human brain in exactly the same way as it resides inside our skulls. Rather, it implies the recreation of all its properties in an alternative substrate, namely a computer system.

Moreover, emulation is not simulation. We're not looking to merely give the appearance of human-equivalent cognition; a simulation implies that not all properties of a model are present. Again, it's a complete 1:1 emulation that we're after.

Now, given that we're looking to model the human brain in a digital substrate, we have to work according to a rather fundamental assumption: computational functionalism. This goes back to the Church-Turing thesis, which holds that every effectively computable function can be computed by a Turing machine; it follows that a universal Turing machine can emulate any other. And if brain activity is regarded as a function that is physically computed by brains, then it should be possible to compute it on a Turing machine. Like a computer.
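The universality claim can be made concrete in a few lines of code. Below is a minimal Turing machine interpreter in Python; the particular machine (a binary incrementer) and all of its state names are my own illustrative inventions, but it shows the sense in which one generic mechanism can run any transition table it is handed:

```python
# A minimal Turing machine interpreter. The point of computational
# functionalism is that any physically computable process -- by
# assumption, including brain activity -- can be expressed as a
# transition table like the one below and run on generic hardware.
# The machine here is a toy example: it increments a binary number.

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """transitions maps (state, symbol) -> (new_state, write_symbol, move)."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Increment a binary number: scan right to the end, then carry back left.
INCREMENT = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt",  "1", "R"),
    ("carry", "_"): ("halt",  "1", "R"),
}

print(run_turing_machine(INCREMENT, "1011"))  # 1011 (11) + 1 -> 1100 (12)
```

The interpreter knows nothing about arithmetic; all of the "intelligence" lives in the transition table, which is exactly the functionalist's point.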

So, if you believe that there's something mystical or vital about human cognition you should probably stop reading now.

Or, if you believe that there's something inherently physical about intelligence that can't be translated into the digital realm, you've got your work cut out for you to explain what that is exactly—keeping in mind that any informational process is computational, including those brought about by chemical reactions. Moreover, intelligence, which is what we're after here, is something that's intrinsically non-physical to begin with.

The roadmap to whole brain emulation

A number of critics point out that we'll never emulate a human brain on account of the chaos and complexity inherent in such a system. On this point I'll disagree. As Bostrom and Sandberg have pointed out, we will not need to understand the whole system in order to emulate it. What's required is a functional understanding of all necessary low-level information about the brain and knowledge of the local update rules that change brain states from moment to moment. What is meant by low-level at this point is an open question, but it likely won't involve a molecule-by-molecule understanding of cognition. And as Ray Kurzweil has argued, the brain contains masterful arrays of redundancy; it may not be as complicated as we currently think.

In order to gain this "low-level functional understanding" of the human brain we will need to employ a series of interdisciplinary approaches (most of which are currently underway). Specifically, we're going to require advances in:
  • Computer science: We have to improve the hardware component; we're going to need machines with the processing power required to host a human brain; we're also going to need to improve the software component so that we can create algorithmic correlates of specific brain functions.
  • Microscopy and scanning technologies: We need to better study and map the brain at the physical level; brain slicing techniques will allow us to visibly study cognitive action down to the molecular scale; specific areas of inquiry will include molecular studies of individual neurons, the scanning of neural connection patterns, determining the function of neural clusters, and so on.
  • Neurosciences: We need more impactful advances in the neurosciences so that we may better understand the modular aspects of cognition and start mapping the neural correlates of consciousness (what is currently a very grey area).
  • Genetics: We need to get better at reading our DNA for clues about how the brain is constructed. While I agree that our DNA will not tell us how to build a fully functional brain, it will tell us how to start the process of brain-building from scratch.
Essentially, WBE requires three main capabilities: (1) the ability to physically scan brains in order to acquire the necessary information, (2) the ability to interpret the scanned data to build a software model, and (3) the ability to simulate this very large model.
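As a toy illustration of capability (3), here is a simulation of a single leaky integrate-and-fire neuron in Python. The model and every parameter in it are simplified placeholders of my own choosing; a real emulation would involve billions of far richer units. But the basic structure, a state updated by a local rule at every time step, is the same:

```python
# A toy leaky integrate-and-fire neuron -- a glimpse of what simulating
# the interpreted model looks like at the smallest possible scale.
# All parameters are illustrative placeholders, not measured values.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Return the time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: the membrane voltage decays toward rest
        # while accumulating input -- the "local update rule".
        v += dt / tau * (v_rest - v) + i_in
        if v >= v_threshold:
            spikes.append(t)
            v = v_reset  # fire and reset
    return spikes

# A constant drive makes the neuron fire at a regular rate.
print(simulate_lif([0.06] * 100))
```

Scaling this from one crude unit to a faithful model of ~86 billion interconnected neurons is precisely where the scanning and interpretation capabilities come in.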


Inevitably the question as to 'when' crops up. Personally, I couldn't care less. I'm more interested in viability than timelines. But, if pressed for an answer, my feeling is that we are still quite a ways off. Kurzweil's prediction of 2030 is uncomfortably optimistic in my opinion; his analogies to the human genome project are unsatisfying. This is a project of much greater magnitude, not to mention that we're still likely heading down some blind alleys.

My own feeling is that we'll likely be able to emulate the human brain in about 50 to 75 years. I will admit that I'm pulling this figure out of my butt as I really have no idea. It's more a feeling than a scientifically-backed estimate.

Lastly, it's worth noting that, given the capacity to recreate a human brain in digital substrate, we won't be too far off from creating considerably greater than human intelligence. AI theorist Eliezer Yudkowsky has claimed that, because of the brain's particular architecture, we may be able to accelerate its processing speed by a factor of a million relatively easily. Consequently, predictions as to when we may hit the Singularity will likely coincide with the advent of a fully emulated human brain.
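Yudkowsky's figure is at least plausible on a back-of-envelope basis: neurons spike at most a few hundred times per second, while digital logic switches billions of times per second. The rates below are rough order-of-magnitude assumptions on my part, not measurements:

```python
# Back-of-envelope check on the "million-fold speedup" claim.
# Both figures are rough order-of-magnitude estimates.

neuron_max_firing_rate_hz = 200      # sustained spiking tops out around here
processor_clock_rate_hz = 2e9        # a commodity 2 GHz processor

speedup = processor_clock_rate_hz / neuron_max_firing_rate_hz
print(f"{speedup:.0e}")  # about 1e7 -- comfortably above a million
```

Obviously raw switching speed is not the whole story (memory bandwidth and parallelism matter enormously), but it shows the headroom the claim is pointing at.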

Myers still thinks Kurzweil does not understand the brain

The blog war between PZ Myers and Ray Kurzweil continues. Myers has now retorted to Kurzweil's retort:

You can't measure the number of transistors in an Intel CPU and then announce, "A-ha! We now understand what a small amount of information is actually required to create all those operating systems and computer games and Microsoft Word, and it is much, much smaller than everyone is assuming." Put it in those terms, and the Kurzweil fanboys would laugh at him; put it in terms of something they don't understand at all, like the development and function of the brain, and they're willing to go along with the pretense that the genome tells us that the whole organism is simpler than they thought.

I presume they understand that if you program a perfect Intel emulator, you don't suddenly get Halo: Reach for free, as an emergent property of the system. You can buy the code and add it to the system, sure, but in this case, we can't run down to GameStop and buy a DVD with the human OS in it and install it on our artificial brain. You're going to have to do the hard work of figuring out how that works and reverse engineering it, as well. And understanding how the processor works is necessary to do that, but not sufficient.
Myers concludes,
In short, here's Kurzweil's claim: the brain is simpler than we think, and thanks to the accelerating rate of technological change, we will understand its basic principles of operation completely within a few decades. My counterargument, which he hasn't addressed at all, is that 1) his argument for that simplicity is deeply flawed and irrelevant, 2) he has made no quantifiable argument about how much we know about the brain right now, and I argue that we've only scratched the surface in the last several decades of research, 3) "exponential" is not a magic word that solves all problems (if I put a penny in the bank today, it does not mean I will have a million dollars in my retirement fund in 20 years), and 4) Kurzweil has provided no explanation for how we'll be 'reverse engineering' the human brain. He's now at least clearly stating that decoding the genome does not generate the necessary information — it's just an argument that the brain isn't as complex as we thought, which I've already said is bogus — but left dangling is the question of methodology. I suggest that we need to have a combined strategy of digging into the brain from the perspectives of physiology, molecular biology, genetics, and development, and in all of those fields I see a long hard slog ahead. I also don't see that noisemakers like Kurzweil, who know nothing of those fields, will be making any contribution at all.
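Myers's penny example is easy to check. Using illustrative growth rates, twenty years of compounding from a one-cent base falls well short of a million dollars, even with an absurdly aggressive exponent:

```python
# Myers's penny, made concrete: exponential growth from a tiny base
# does not automatically reach a large target. Rates are illustrative.

penny = 0.01

# At a generous 5% annual interest over 20 years:
modest = penny * 1.05 ** 20
print(f"${modest:.4f}")   # under 3 cents

# Even doubling every single year for 20 years:
doubling = penny * 2 ** 20
print(f"${doubling:.2f}") # roughly $10,485 -- still far short of a million
```

The base and the exponent both matter, which is exactly the gap Myers is accusing Kurzweil of hand-waving over.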