May 28, 2011

PopSci on the "League of Performance-Enhanced Athletes"

I've been arguing for the establishment of performance-enhanced leagues for years now, so it's nice to see Ryan Bradley of Popular Science make the same case:
Through this openness the league creates an environment where cutting-edge science is discussed daily, and celebrated, alongside athletic triumph. Better still: legitimizing enhancement would make the enhancements better. More drugs hit the market, more treatments become available, and this would trickle down to non-athletes. Would all this openness and advancement foster a more honest, inviting, even wholesome environment? Maybe. Creating a separate league where drugs are legal would, without a doubt, make competition safer for athletes. Matthew Herper, who has covered science and health (and, by extension, athletes and drugs) for a decade at Forbes, says as much.

“To me, the most obvious solution has always been to legalize those drugs that work, and to experimentally monitor new entrants, including dietary supplements, for both efficacy and safety. Biological improvement would be treated much as athletic equipment like baseball bats and running shoes. This could improve both athletes’ performance and their health, and would be a lot better than having everybody trying whatever additive they can sneak, attempting to stay ahead of drug tests, and trusting anecdotes as a way of measuring safety and efficacy.”

But perhaps most importantly, by keeping advances off the field, we’re holding back possibilities. A few years ago I visited Hugh Herr, the director of biomechatronics at MIT’s Media Lab, who had just invented a robotic ankle that would soon revolutionize prosthetics. We ended up discussing the ankle a little bit, but mostly we talked about science in sports. Herr is an athlete. As a young man he was a world-class rock climber. A week before my visit, he had been busy trying to convince the International Association of Athletics Federations to allow South African runner Oscar Pistorius to compete in the Olympics. Pistorius has no legs below his knees and runs using Cheetah Flex-Foot carbon fiber limbs which, arguably, give him an unfair advantage. Herr is also a double amputee, and walks and climbs using prosthetics. That day in his lab, while he showed me his improved ankle and described his work with veterans, Herr told me that he sees no reason why we can’t make “disabled” people stronger and faster than the rest of us. In fact, we already are: just look at Pistorius. The IAAF agreed and, weeks later, decided to ban the South African from competition.

May 26, 2011

Two very transhumanist-themed trailers: Limitless and Captain America

Bayes: The Theory That Would Not Die [book]

A new book on Bayes' theorem has just been published: The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy by Sharon Bertsch McGrayne. From the SciAm review, Why Bayes Rules: The History of a Formula That Drives Modern Life:
Discovered by English clergyman Thomas Bayes, the formula is a simple one-liner: Initial Beliefs + Recent Objective Data = A New and Improved Belief. A modern form comes from French mathematician Pierre-Simon Laplace, who, by recalculating the equation each time he got new data, could distinguish highly probable hypotheses from less valid ones. One of his applications involved explaining why slightly more boys than girls were born in Paris in the late 1700s. After collecting demographic data from around the world for 30 years, he concluded that the boy-girl ratio is universal to humankind and determined by biology.

Many theoretical statisticians over the years have assailed Bayesian methods as subjective. Yet decision makers insist that they bring clarity when information is scarce and outcomes uncertain. During the 1970s John Nicholson, the U.S. submarine fleet commander in the Mediterranean, used Bayesian computer analysis to figure out the most probable paths of Soviet nuclear subs. Today Bayesian math helps sort spam from e-mail, assess medical and homeland security risks and decode DNA, among other things.

Now Bayes is revolutionizing robotics, says Sebastian Thrun, director of Stanford University’s Artificial Intelligence Laboratory and Google’s driverless car project. By expressing all information in terms of probability distributions, Bayes can produce reliable estimates from scant and uncertain evidence.
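The review's one-liner (initial belief + new data = improved belief) is easy to make concrete. Below is a toy Bayesian update in the spirit of the spam-sorting application mentioned above; the prior and likelihoods are invented purely for illustration:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule:
    posterior = prior * likelihood / total probability of the evidence."""
    numerator = prior * likelihood_if_true
    evidence = numerator + (1 - prior) * likelihood_if_false
    return numerator / evidence

# Invented numbers: 20% of mail is spam; the word "prize" appears in
# 50% of spam but only 1% of legitimate mail. Seeing "prize" lifts the
# belief that a message is spam from 0.20 to about 0.93.
posterior = bayes_update(prior=0.2, likelihood_if_true=0.5, likelihood_if_false=0.01)
print(round(posterior, 3))  # → 0.926
```

Laplace's trick of recalculating with each new datum amounts to feeding each posterior back in as the next prior.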

May 25, 2011

Crystal Pite's Dark Matters [dance]

Canadian dancer/choreographer Crystal Pite's newest piece, Dark Matters, addresses some transhumanist themes, namely the creation of artificial progeny and its potentially grave and unpredictable implications. The title refers both to astrophysics and human impulses, exploring the idea of undetectable forces at work in cosmology and human affairs.

Dark Matters is a theatrical hybrid of puppetry and dance that opens as a sinister fable in which an inventor creates a puppet (or is that a robot?) with fateful results—and all expressed through contemporary ballet.

May 24, 2011

Designer Psychologies: Moving beyond neurotypicality

Designer psychologies, or customized cognitive processing modalities, describes the potential for future individuals to selectively alter the specific and unique ways in which they take in, analyze and perceive the world. Cognitive modalities are the psychological frameworks that allow for person-to-person variances in subjectivity, including such things as emotional responses, social engagement, aesthetics and prioritization. The day is coming when we'll be able to decide for ourselves how it is exactly that we want to process our world.

Most of us have the so-called neurotypical cognitive response. We know, however, mostly through our interactions with those outside of the cognitive norm, that neurotypicality is not the be-all and end-all of psychological experience. As the Autism Rights Movement has demonstrated, our tendency to describe anyone outside the neurotypical norm as being abnormal, pathological or broken in some way is not entirely accurate or fair. Impairment is in the eye of the beholder, and in many cases, we are finding considerable value in the neurodiverse experience.

Indeed, autism is a great example of this. While it can largely be characterized as a social communication disorder, this definition of autism is clearly an expression of the neurotypical bias which, rightly or wrongly, places great value on person-to-person interactions and social conventions; autistics don’t necessarily see this as a problem and are often quite content to focus on their own thoughts and pursuits. Moreover, it’s through the autistic lens that the world can be processed, understood and appreciated in a way that’s qualitatively different than that of the neurotypical mind.

Society benefits from neurodiversity. It carries an intrinsic worth. It’s through “different kinds of thinking” that we get alternative perspectives on the world and, as a result, unique and often astounding forms of expression. Famous autistics have produced great works of art, scientific theories, and even such unorthodox inventions as Temple Grandin’s cattle-calming chutes.

Now, as I address the potential for designer psychologies, I am not necessarily saying we should develop technologies to help us all become autistics—but if that’s your cup of tea, then go for it. What I am suggesting is that autistics provide a glimpse into the “other,” that there is huge potential for other cognitive modalities outside of neurotypicality, and that we should consider developing our neurosciences such that we can individually tailor our psychologies in accordance with our values, changing environments, technologies, capacities and social arrangements.

From neurotypicality to neurodiversity

Okay, so why do we need to reach outside the bounds of neurotypicality? What’s so wrong with our brains that we can’t leave well enough alone?

As just mentioned, there is intrinsic personal and societal worth to neurodiversity. But in addition, it is an expression of our cognitive liberty and our right to modify our minds as we see fit. Moreover, it is a way for us to re-jig and improve upon an increasingly outdated piece of equipment, namely our Paleolithic brains.

While there is significant variance within neurotypicality, it defines a fairly strict band of cognitive traits and normal functioning that, when transgressed, is read as pathology or, in some cases, extreme giftedness. And whatever diversity exists within that band is nothing compared to the potential space of possibilities outside it.

It’s within this small patch of “normalcy” that Homo sapiens, for the most part, currently dwells. We are a species that finds itself in a post-industrial, information-era civilization, a situation far removed from the environment in which we evolved over hundreds of thousands of years. It’s for this reason that neurotypicality should also be thought of as Paleoneurology. Our minds are best suited to life in Paleolithic tribal environments; our psychological tendencies are adaptive traits expressed in such characteristics as tribalism, hierarchical social arrangements, the Dunbar number (a ceiling on stable social connections), cognitive biases, tightly calibrated emotional responses, and an aesthetic appreciation for nature, organic forms, and human things.

We obviously don’t live in a Paleolithic society (except in my household), and many of these traits have become maladaptive, producing proclivities for addiction and modern afflictions like depression, obesity, ADHD, and stress. The mismatch has also produced environmental and social alienation: we have great difficulty dealing with noise and light pollution, and with social arrangements far removed from tribal and familial settings.

This is a problem that is not getting better anytime soon. In fact, we, as biological creatures, have created a civilization that is increasingly complex and even postbiological. We stand to become even further distanced and alienated from our settings as time passes. Sure, many of the things we have are deliberately designed to please, but that’s also part of the problem, leading to such things as web and video game addiction. And yet other things that surround us are completely outside of intentional design — the products of brute utilitarianism (think concrete slabs, roads, telephone poles, computer code, massive data sets, and so on).

The processing mind

From a certain vantage point the designer psychologies initiative could be seen as a kind of cognitive enhancement, and I don’t really have a huge problem with that. But this is more than just enhancement—it’s more than such things as increased attention spans, intelligence, and memory—traits that can clearly be labeled as improvements.

It’s through designer psychologies that we can strive to be different, and not just better; this is why the neurodiversity movement is so important. It’s about creating alternatives. Consequently, alternative psychologies may actually result in the voluntary onset of impairment — or at least an impairment as viewed through the lens of other modalities. This will be a very tricky area to navigate in terms of the ethics, but it’s a conversation that needs to be had.

And by alternative psychologies I am referring to fundamental changes in the ways our brains perceive and process information. Our brains are largely preconfigured to help us interpret and operate in the world, much like a computer processes information. This computer analogy goes back to the mathematician Claude Shannon, who described information processing as the conversion of latent information into manifest information: unprocessed or pre-processed information, whether emitted by the environment or by an intentional agent, is delivered to a receiver, who transforms, processes, and potentially responds and acts on it. Along the way our brains do such things as data filtering and prioritization to help us distinguish signal from noise. We have little to no control over what we think is important, valuable or aesthetically pleasing; these are largely autonomous responses.

How wonderful would it be to recalibrate these information processes according to our needs as a transhuman species? Thankfully, we have a good idea as to how we might make this possible.

Back in the 1970s, Abraham Moles and Frieder Nake established and analyzed the links between information processing and aesthetics. They argued that the subjective experience of interpreting incoming data depends on the software set up in the brain. It is here, then, that we can work to change our seemingly innate preferences.

Cognitive customization and design

So, what kinds of things would we want to do? What are some examples of designer psychologies?

The space of all possible viable and worthwhile psychologies is absolutely massive. Neurotypicality is but a tiny speck of what’s possible. While it would be impossible for me to predict the various ways in which we might want to alter our cognitive modalities, there are a number of areas we might want to consider:

A. Aesthetics

It’s time that we started to adapt our minds to our environment rather than the other way around. A human, transhuman or posthuman mind needs to be able to interpret contemporary things. Consequently, we need to re-think our aesthetic appreciation of those artifacts not traditionally present in the human palette of taste. Consider a world in which we find greater appreciation and deeper worth in everyday objects, mundane tasks, and abstract things (e.g. numbers, patterns and data sets). Or in things we can’t possibly imagine. We would essentially be expanding the space of subjective evaluation and appreciation.

As an aside, this could work in conjunction with the strengthening and weakening of our sensory capacities, and even the deliberate onset of synesthesia (the blending and intermingling of sensory experience). These new senses would have to be carefully calibrated to ensure that (1) the extreme ends of the bandwidth scale are safe and not overwhelming to the receiver, and (2) the inputs can be appropriately interpreted and acted upon (i.e. across all points of the emotional spectrum, including such things as a sense of disgust or repugnance where such a reaction is warranted).

B. Emotions and mood

Cognitive processing is very closely tied to our emotional responses. We are reflexively drawn to or repulsed by certain things simply because we’re hardwired that way. We could re-design our psychologies such that certain tendencies are strengthened or weakened in ways described earlier.

But emotional response can also refer to our default brain-state, the so-called normal frame of our day-to-day dealings. This is often referred to as our psychological baseline. When we’re below the baseline we’re depressed and when above we’re elevated or even manic. This is an incredibly important area for consideration, especially the prospect of permanently raising the baseline above the default state.

C. Biases

One of the most wonderful, if not sobering, pages in all of Wikipedia is the list of cognitive biases. This page lists over a hundred biases, each defined as “a pattern of deviation in judgment that occurs in particular situations.” These deviations in judgment are like software bugs in the human mind: difficult to overcome, they often lead to perceptual distortion, inaccurate judgment or illogical interpretation.

Cognitive biases are instances of evolved mental behavior. Some biases were adaptive: they produced more effective actions in given contexts, or enabled faster decisions when speed was worth more than accuracy. Others stem from insufficient mental faculties, or from the misapplication of a mechanism that is adaptive under different circumstances. We are very poor at math and probabilistic reasoning, for example, which has given rise to a host of cognitive biases, including those that lead to behaviors such as gambling.
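The gambling example is easy to check empirically. Below is a minimal simulation of the gambler's fallacy, the intuition that a fair coin is "due" for tails after a run of heads; the streak length and sample size are arbitrary choices for illustration:

```python
import random

random.seed(0)  # reproducible run
flips = [random.random() < 0.5 for _ in range(200_000)]  # True = heads

# Empirical frequency of heads immediately after three heads in a row.
# The fallacy predicts well under 0.5; independence predicts 0.5.
after_streak = [flips[i + 3] for i in range(len(flips) - 3)
                if flips[i] and flips[i + 1] and flips[i + 2]]
rate = sum(after_streak) / len(after_streak)
print(round(rate, 2))  # ≈ 0.5: the streak changes nothing
```

Our intuitions insist otherwise, which is exactly the kind of bug the bias list catalogues.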

Through designer psychologies, we could alleviate (if not eliminate) the impacts of these biases, which would result in clearer thinking and improved rationality.

D. Social engagement

Some individuals may wish to strengthen the attachments they feel to other persons. Shyness, introversion and inhibitions could be overcome. Drugs like MDMA cause the user to feel closer, more in-tune and empathetic towards others. But these feelings don’t last and there tend to be other side-effects. It’s through designer psychologies that such a modality could be maintained more consistently. The strengthening of our mirror neurons, for example, could make us more capable of considering other minds.

E. Moral enhancement

Which leads to another promising area, that of moral enhancement. Moral enhancement is the speculative study of how we could modify and enhance the ways in which humans act as moral agents.

Morality is clearly a relative term, subject to individual, social and cultural norms, but it’s a fascinating area of inquiry as transhumanists try to figure out the best ways to modify themselves to improve their moral behaviour.

Admittedly, this is a short list of the kinds of mods that may someday be possible. There are likely many other malleable traits, including factors we’re still unsure about and whose effects on personality and conscious awareness remain unknown. These examples are also all arguably within neurotypical experience; I would imagine that the designer psychologies that do come into existence will be profoundly different from anything we have ever experienced.

Getting there

Up to this point there’s been a lot of handwaving on my part about how we could actually tweak our brains to such a degree. Thankfully, there are in fact a number of promising areas that may make the vision of designer psychologies possible.

A. Targeted psychopharmaceuticals

We already have a number of drugs at our disposal that can modify our psychologies in the ways I’ve described, but they often come with impairment, poor judgement and side-effects, and of course, they don’t last. Future pharmaceuticals may be developed that are safer, work with greater cognitive specificity, are customized to the user, and are longer-lasting.

These drugs could work by boosting or damping the effects of existing neurotransmitters, or by introducing novel neurotransmitters altogether. They could also act on hormone levels and on the other chemical and cellular reactions that shape human psychology.

B. Adaptive harnessing

Evolutionary neurobiologist Mark Changizi argues that we shouldn’t think about introducing externalities to the human brain, but about reworking and re-adapting its existing mechanisms. As an example, he refers to neuronal recycling. In his words, “To harness our brains, we want to let the brain’s brilliant mechanisms run as intended—i.e., not to be twisted. Rather, the strategy is to twist Y into a shape that the brain does know how to process.” Simply stated, Changizi is suggesting that we work with what we’ve got.

C. Genomics

There’s no question that human psychology has a genetic underpinning. We know that certain traits and tendencies “run in the family.” It will be through the maturation of genomic technologies that we will eventually be able to identify and modify those genetic elements that are responsible for our behavior.

D. Cognitive implants

Cognitive implants are exactly that: assistive devices that can be implanted in the brain. There are a number of promising areas in (or soon to be in) development:

  • Brain pacemakers: Brain pacemakers are implants that send small electric signals to brain tissue, yielding effective treatments for epilepsy, Parkinson’s and depression. Clearly there’s potential for using more sophisticated and targeted pacemakers to do much more.
  • Artificial neurons: As opposed to adaptive harnessing, artificial neurons could introduce novel capacities into the brain altogether. While most have their eye on this technology for treating neurodegeneration, it could also be used to boost the capacity of the human brain and to establish entirely new ancillary cognitive systems.
  • Molecular nanotechnology: And of course there’s molecular nanotechnology, with which all bets are off. Nano, if it can be actualized to the degree we think it can, could radically rework and cyborgize the human brain. All cognitive functions could be altered — everything from the sensory inputs that convert incoming data into brain-readable format, through to the cognitive interactions that cause shifts in mood, perception and emotional response.

Transhuman diversity

Given the incredible possibilities for designer psychologies and the implicit understanding that its adoption will be driven by individual choice, the potential for neurodiversity to explode in the future is astounding.

Designer psychologies will increase diversity and result in greater tolerance for different types of minds. It will vastly increase and expand subjective awareness, resulting in greater potential, creativity, scientific and technological breakthroughs and forms of expression. It will further the cause of cognitive liberties and ultimately result in less suffering as we gain greater control over our mental faculties.

To those who decry transhumanism and human enhancement as forces that would homogenize humanity, I hope that the prospect of designer psychologies will show that the posthuman future will be more diverse and inclusive than anyone can possibly imagine.

Cyborg 2087 (1966) Trailer

Wow, how is it possible that I've gone my entire life without knowing about this movie: Cyborg 2087 (1966)? The plot is surprisingly Terminatoresque:
Garth (Michael Rennie), a cyborg from the future, travels back in time to 1966 to prevent Professor Sigmund Marx (Eduard Franz) from revealing his new discovery, an idea that will make mind control possible and create a tyranny in Garth's time. He is pursued by two "Tracers" (also cyborgs) out to stop him.

Garth enlists the help of Dr. Sharon Mason (Karen Steele), Marx's assistant. He gets her to summon her friend, medical doctor Zeller (Warren Stevens) to operate on him, to remove a homing device used by the Tracers to track him. The local sheriff (Wendell Corey) also becomes involved.

Garth succeeds in defeating the Tracers and convincing Professor Marx to keep his discovery secret. Then, with his future wiped out as a result, Garth ceases to exist; the people who helped him do not even remember him.
Check out the trailer:

May 23, 2011

HuffPo: Will Artificial Intelligence Replace Your Family Doctor?

Sounds like Helene Pavlov, MD, is concerned for her job: From her Huffington Post article, "Will Artificial Intelligence Replace Your Family Doctor?":
How concerned or thrilled should I be that the intelligence designated to my future health care decisions will be potentially limited or artificial?

What does the practice of medicine mean? What makes a "good doctor?" What is it you want in your physician? For me, I want him/her to listen to my complaints/concerns. Not all patients know what symptoms to prioritize or what signs and symptoms might be significant or related. Personally, I need a physician to ask appropriate and sometimes probing questions. For instance, a complaint of being tired and unable to sleep should prompt questions such as are you going to the bathroom all the time? This additional information might mean the difference between getting a B12 shot or being evaluated and treated for diabetes or a prostate condition.

In the past, a good physician knew his/her patients. He/she was there at your birth and then at the birth of your children -- the Marcus Welby, M.D. or Dr. Kildare who did it all. As medicine and science and technology evolved, the medical specialist was born. That was actually necessary given the accelerated influx of information and research discoveries. To know all areas of medicine thoroughly is virtually impossible. The problem with the specialist scenario, however, is making sure you are going to the "right" one. It is more than a matter of competence. Specialists tend to be very focused and may "listen" only to those symptoms/signs relative to their specialty and assume some other specialist is dealing with "everything else." In most instances, triage from an astute general internist or primary physician or other health care provider is required.
A couple of quick comments:
  1. It's more likely than not that future doctors will use AI expert systems to assist in their diagnoses rather than be replaced by them altogether.
  2. The qualitative ways in which a doctor and a Watson-like system make their diagnoses are one and the same. There's nothing inherently special about human data processing and decision making relative to what a future medical expert system will do.

May 22, 2011

Chris Chatham: 10 Important Differences Between Brains and Computers

Chris Chatham has penned an article in which he outlines 10 important differences between brains and computers. It's an interesting post and I recommend that you give it a read. Here's a quick overview:
  1. Brains are analogue; computers are digital
  2. Computers access information in memory by polling a memory address, brains search memories using cues
  3. The brain is a massively parallel machine; computers are modular and serial
  4. Processing speed is not fixed in the brain; there is no system clock
  5. Short-term memory is not like RAM
  6. Computers are hardware that runs software, there is no “mind software” running on brains
  7. Synapses are far more complex (electrochemical) than computer logic gates (electrical)
  8. Computers use processors and memory for different functions, there is no such distinction in the brain
  9. Computers are designed, built and are of fixed architecture, the brain is a self-organizing system
  10. Computers have no body, brains do
Okay, fair enough. But there's something deeply unsatisfying about Chatham's distinctions. It's an apples and oranges kind of thing in which we're still largely talking about fruit.

Sure, computers today can't really be compared to human brains for exactly the reasons Chatham outlines. Computers don't create minds in the ways that brains do (at least not yet). But it's folly to suggest that brains aren't a kind of computer: one, for lack of a better word, that we haven't quite figured out yet. And it's very likely that, as our information technologies and artificial intelligence theories progress, our computers will increasingly come to resemble human brains.

It's still computation, after all. The functionalist approach to cognition suggests that the brain is likely churning away just like any other Turing machine, in keeping with the Church-Turing thesis of computational universality. It just happens to crunch numbers exceptionally well with meat and produce this remarkable thing we call mind.

Sprechen Sie Dolphin?

A new approach combining marine biology with artificial intelligence suggests that we may be closer to conversing with dolphins than we think. Denise Herzing of the Wild Dolphin Project in Florida, and Thad Starner, an artificial intelligence expert at Georgia Institute of Technology in Atlanta, are working on a project called CHAT (Cetacean Hearing and Telemetry) that may finally allow our two species to communicate:
Starner and some of his students are now developing a smartphone-sized computer that will be worn across a diver's chest in a waterproof case. The device will be connected to two hydrophones, capable of picking up dolphin sounds underwater - including those beyond the range of human hearing. As it can be difficult for humans to identify the source of underwater sounds, an arrangement of LED lights within the diver's mask will indicate the direction from which the various clicks and squeals are originating.

Not only will the computer hopefully be able to decode and indicate what the dolphins are saying, but by using a handheld Twiddler (sort of a combination mouse and keyboard), the diver will be able to select and send out audible dolphin-ese responses.

Before any interspecies conversations can take place, however, the team first needs to figure out the animals' vocabulary. In order to do so, they plan on running recordings of dolphin vocalizations through a pattern detection algorithm, designed by Starner. The system analyzes data, picks out deviations from the norm, then groups similar deviations together. It is hoped that by observing dolphins, and seeing which recurring deviant sounds accompany which behaviors and situations, the researchers will be able to identify specific "fundamental units" of dolphin speech.
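The two steps described in that last paragraph, flagging sounds that deviate from the norm and then grouping similar deviations, can be sketched in a few lines. This is an illustrative toy, not Starner's actual algorithm; the feature values, threshold and tolerance are all invented:

```python
from statistics import mean, stdev

def find_deviants(features, threshold=1.5):
    """Indices of samples more than `threshold` standard deviations from
    the mean. (Outliers inflate the stdev, hence the modest threshold.)"""
    mu, sigma = mean(features), stdev(features)
    return [i for i, f in enumerate(features) if abs(f - mu) > threshold * sigma]

def group_similar(deviants, features, tolerance=0.5):
    """Greedily group deviant samples whose feature values are close."""
    groups = []
    for i in deviants:
        for g in groups:
            if abs(features[g[0]] - features[i]) <= tolerance:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Hypothetical 1-D feature (say, peak frequency in kHz) per recorded sound;
# two high-frequency whistles stand out from the background clicks.
freqs = [10.1, 9.8, 10.0, 10.2, 9.9, 24.7, 10.0, 25.1, 9.7, 10.3]
outliers = find_deviants(freqs)
print(outliers, group_similar(outliers, freqs))  # → [5, 7] [[5, 7]]
```

A real system would work on multi-dimensional spectral features with proper clustering, but the shape of the pipeline (detect deviations, then group them) is the same.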

Catherine Mayer: Live long. Stay healthy. Join the immortals

The seven ages of man are a thing of the past, says Catherine Mayer, and we're never too old to find a new lover, start a business or even have a baby. Now, we're ready for anything–except death. Welcome to what Mayer calls the world of "amortality."
The problem is that not even scientists can agree on the causes of ageing or the possibilities of an antidote. A majority of mainstream scientists are pessimistic about the possibilities for unabated life extension. The world's verifiably longest-lived person, Jeanne Calment, died at 122 in 1997, and that may be close to the edge of the possible human span. But there have been flurries of excitement around discoveries that seem to hold out the promise of slowing ageing. Last November, Nature magazine published a study showing that mice with suppressed production of an enzyme called telomerase aged swiftly but could be rejuvenated if the telomerase supply was restored. And there is a chorus of dissenting voices promising that if we work out ways to live long enough, we'll be able to live for ever. Ray Kurzweil, for example, puts his faith in nanotechnology, the development of machines tinier than atoms that could be deployed in the human body to repair the ravages of time. Kurzweil's impressive record as an inventor (he developed the first flatbed scanners, optical character-recognition software, print-to-speech and speech-recognition technologies, as well as making fine keyboards found in many music studios), together with his unnerving habit of issuing outlandish predictions that later prove true, mean only the foolhardy would dismiss his forecasts out of hand.

He has signed up to have his head cryonically frozen after death, envisaging resuscitation in a more technologically advanced future, but he's not "super-enthusiastic" about refrigeration; it is, he says, a back-up plan. He is perched on a sofa in his office in Wellesley, near Boston, surrounded by awards, posters for two films centred on his transhumanist ideas, photos with people even more successful than he. "I have enough trouble pursuing my interests while I'm alive and kicking," he says. "It's hard to imagine doing that when you're frozen, but proponents of it say it's better than the alternative. Really, my plan is to avoid dying, I think that's the best approach."

For Kurzweil – 62 at the time of the interview last year, "biologically more like 41" – that effort involves a Spartan diet, exercise and handfuls of vitamins and around 150 supplements daily. Many amortals can't be bothered to put the work into staying vibrant, trusting instead to boffins like Kurzweil to deliver us from the clutches of our own biology.

Unfortunately there's no firm evidence that they will do so, even in our longer lifetimes. Amortals may be assailed by depression or left unprepared when the gap between their ageless sense of self and the reality of ageing yawns. I would urge my family and friends to drink elixirs or open their veins to restorative swarms of nanobots if I thought that would grant them even a few additional years. Instead I cling to the hope that by eating well and taking exercise, engaging and being engaged, they will at the very least challenge Jeanne Calment's record for longevity.
Catherine Mayer's Amortality: The Pleasures and Perils of Living Agelessly was published on 12 May 2011 by Vermilion.

May 21, 2011

Boston Globe sneaks a peek into the deep future

The Boston Globe asks: "What will happen to us?" To answer the question, writer Graeme Wood highlights the work of futurists Nick Bostrom, Sir Martin Rees, Sean Carroll and Ray Kurzweil. Highlights:
The community of thinkers on distant-future questions stretches across disciplinary bounds, with the primary uniting trait a willingness to think about the future as a topic for objective study, rather than a space for idle speculation or science fictional reverie. They include theoretical cosmologists like Sean Carroll of the California Institute of Technology, who recently wrote a book about time, and nonacademic technology mavens like Ray Kurzweil, the precocious inventor and theorist. What binds this group together is that they are not, says Bostrom, “just trying to tell an interesting story.” Instead, they aim for precision. In its fundamentals, Carroll points out, the universe is a “relatively simple system,” compared, say, to a chaotic system like a human body — and thus “predicting the future is actually a feasible task,” even “for ridiculously long time periods.”
Also among the cosmologists is Rees, the speaker at the Royal Institution, who turned his attention to the end of time after a career in physics reckoning with time’s beginning. An understanding of these vast time scales, he contends, should have a large and humbling effect on our predictions about human evolution. “It’s hard to think of humans as anything like the culmination of life,” Rees says. “We should expect humans to change, just as Darwin did when he wrote that ‘no living species will preserve its unaltered likeness into a distant futurity.’ ” Most probably, according to Rees, the most important transformations of the species will be nonbiological. “Evolution in the future won’t be determined by natural selection, but by technology,” he says — both because we have gone some distance toward mastering our biological weaknesses, and because computing power has sped up to a rate where the line between human and computer blurs. (Some thinkers call the point when technology reaches this literally unthinkable level of advancement the “singularity,” a coinage by science fiction writer Vernor Vinge.)
Bostrom, the Oxford philosopher, puts the odds at about 25 percent, and says that many of the greatest risks for human survival are ones that could play themselves out within the scope of current human lifetimes. “The next hundred years or so might be critical for humanity,” Bostrom says, listing as possible threats the usual apocalyptic litany of nuclear annihilation, man-made or natural viruses and bacteria, or other technological threats, such as microscopic machines, or nanobots, that run amok and kill us all.

This is quite literally the stuff of Michael Crichton novels. Thinkers about the future deal constantly with those who dismiss their speculation as science fiction. But Bostrom, who trained in neuroscience and cosmology as well as philosophy, says he’s mining the study of the future for guidance on how we should prioritize our actions today. “I’m ultimately interested in finding out what we have most reason to do now, to make the world better in some way,” he says.
There is, both in Bostrom’s scenarios and in Rees’s, the possibility of a long and bright future, should we manage to have any future at all. Some of the key technologies capable of going awry also have the potential to keep us alive and prospering — making humans and post-humans a more durable species. Bostrom imagines that certain advances that are currently theoretical could combine to free us from some of the more fragile aspects of our nature, such as our vulnerability to being wiped out by a simple virus, and keep the species around indefinitely. If neuropsychologists learn to manipulate the brain with precision, we could drug ourselves into conditions of not only enhanced happiness but enhanced morality as well, aiming for less fragile or violent societies far more durable than those we enjoy now, in the nuclear shadow.

And if human minds could be uploaded onto computers, for example, a smallpox plague wouldn’t be so worrisome (though maybe a computer-virus outbreak, or a spilled pot of coffee, would be). Not having a body means not being subject to time’s ravages on human flesh. “When we have friendly superintelligent machines, or space colonization, it would be easy to see how we might continue for billions of years,” Bostrom said, far beyond the moment when Rees’s post-human would sit back in his futuristic lawn chair, pop open a cold one, and watch the sun run out of fuel.

There is one surprising survival scenario of particular worry for Bostrom, however — one that involves not a physical death but a moral one. The technologies that might liberate us from the threat of extinction might also change humans not into post-humans, but into creatures who have shed their humanity altogether. Imagine, he suggests, that the hypothetical future entities (evolved biologically, or uploaded to computers and enhanced by machine intelligence) have slowly eroded their human characteristics. The mental properties and concerns of these creatures might be unrecognizable.

“What gives humans value is not their physical substance, but that we are thinking, feeling beings that have plans and relationships with others, and enjoy art, et cetera,” Bostrom says. “So there could be profound transformations that wouldn’t destroy value and might allow the creation of any greater value” by having a deeper capacity to love or to appreciate art than we present humans do. “But you could also imagine beings that were intelligent or efficient, but that don’t add value to the world, maybe because they didn’t have subjective experience.”

Bostrom ranks this possibility among the more likely ways mankind could extinguish itself. It is certainly the most insidious. And it could happen any number of ways: with a network of uploaded humans that essentially abolishes the individual, making her a barely distinguishable module in a larger intelligence. Or, in a sort of post-human Marxist dystopia, humans could find themselves dragooned into soulless ultra-efficiency, without all the wasteful acts of friendship and artistic creation that made life worth living when we were merely human.

“That would count as a catastrophe,” Bostrom notes.

New Scientist asks: When should we give rights to robots?

From the New Scientist article, "When should we give rights to robots?":
A more basic issue is that there is no agreed definition of consciousness. Perhaps in practical terms, a simpler answer to the question of machine rights might come from the way people treat them. We should put our faith in our own ability to detect consciousness, rather than look to philosophical discourse.

There is one obvious shortcoming of this approach: we will probably sense sentience before it is truly deserved because of our remarkable tendency to anthropomorphise. After all, we are already smitten by today's relatively dumb robots. Some dress up their robot vacuum cleaners. Others take robots fishing or go so far as to mourn their loss on the battlefield.

Even so, popular sentiment towards machines and robots will give a vivid feel for the degree of their sophistication. Franklin himself admits that even he referred to his original creation as "she", though he "did not feel at all bad" when he turned "her" off. But when he and a significant number of others do feel a pang of guilt as they flick the off switch, we might well have passed a milestone in artificial cognition: the birth of a machine that deserves rights.

May 20, 2011

Humanity+ @ Parsons recap: How to Live Forever review

This past weekend at the Humanity+ @ Parsons conference in NYC I had a chance to attend the debut screening of Mark Wexler’s new documentary, How to Live Forever. The film chronicles Wexler’s struggle to come to grips with his mother’s recent death and his ensuing existential crisis. To help cope with his newfound dread, Wexler ventures down a number of paths that might help him achieve a longer life. To this end, he interviews centenarians, gerontologists, health and fitness gurus, anti-aging hucksters, and anyone else with an opinion on how to extend life spans.

Of interest to the transhumanist and radical life extension communities, Wexler talks to Aubrey de Grey, Ray Kurzweil, and Tanya Jones of Alcor. But in addition he is lectured on hormone therapy by Suzanne Somers, takes a stab at caloric restriction, and visits with elderly Okinawans in Japan. Importantly, he explores and treats each issue with a certain seriousness—tongue just slightly in cheek—giving each person or approach its due consideration. By doing so, he brings the viewer into each world in an entertaining way and then lets them make up their own minds about the efficacy of each approach.

That said, the central thrust of the documentary is rather weak; Wexler’s struggle is clearly contrived, uninteresting and underdeveloped. Thankfully it’s the characters and insights into aging that give this film its spark. Every segment, location and person that Wexler explores is a little gem that offers insights into both life extension practices and novel approaches to living a healthy life. Wexler offers some wonderful food for thought by juxtaposing a caloric restriction advocate with a gluttonous food critic, by visiting a nursing home in which robots are used to comfort the elderly, and by highlighting the fact that the world’s oldest woman on record smoked, drank and lived alone until her dying day.

In the end, the film offers no real solutions. Its life-affirming insights, many of which are provided by Wexler’s best friend, are pedestrian and unsatisfying. The final shot of the documentary shows Wexler sifting through his dead mother’s paintings, as if to suggest that she lives on through her work. But as Woody Allen once coyly noted, the key to achieving true immortality is not dying in the first place.

How to Live Forever is a wonderful introduction to the sub-cultures that are a part of life extension, but it skirts past some of the deeper philosophical and ethical issues that are an integral part of the larger discussion. The result is a quaint but highly enjoyable film. Those looking for something more analytical, profound or scientific, however, will need to look elsewhere.

Ed Boyden on optogenetics and neural prosthetics [TED]

Neuroscientist Ed Boyden shows how, by inserting genes for light-sensitive proteins into brain cells, he can selectively activate or de-activate specific neurons with fiber-optic implants. With this unprecedented level of control, he's managed to cure mice of analogs of PTSD and certain forms of blindness. And on the horizon: neural prosthetics.

May 19, 2011

Humanity+ @ Parsons recap: Posthumanism and posthumanism

I knew this would happen eventually, and it finally did at the recently concluded Humanity+ @ Parsons conference in NYC: mass confusion over the term "posthumanism."

You see, there are actually three legitimate but subtly different definitions of the term. And at the Parsons conference, an event that brought designers and transhumanists together, this created an interesting problem that resulted in consistent misinterpretation and misunderstanding. Not to mention a wide divergence of opinion.

For most transhumanists, posthumanism is the general idea that we should strive to become posthuman, namely human beings who have been augmented and modified to such a degree that they can no longer be classified as human. A posthuman could be a hyper-genetically modified person, a cyborg, or even a completely non-corporeal uploaded consciousness.

Transhumanist thinking has its roots in the Enlightenment era and is very much informed by secular Humanism. A general premise driving the quest for a posthuman condition is that steady and significant progress is attainable through the application of science and reason, and that we ought to take a human-centric approach to our endeavors (i.e. "If we don't play God, who will?"). And it's not enough to work towards social, political and institutional reform, argue the transhumanists; we should also work to modify and improve the human mind and body itself.

But to many in the design community and European academia (excluding Nick Bostrom's crew at Oxford), the term has its roots in postmodernist thinking. Its focus is more conceptual than practical, more external than internal. Also known as philosophical posthumanism, it is an area of inquiry concerned with the blurring lines between the human body and its environment and with how our external tools have become extensions of our selves. Posthumanists in this context are interested in exosomatic possibilities, such as extended selves and remote presence. They tend to argue that the skin barrier is an increasingly poor dividing line for determining where the human begins and ends. For more on this approach, I recommend The Posthuman Condition: Consciousness beyond the Brain by Robert Pepperell (2003). And be sure to read my review.

In addition to this, there is an ancillary school of thought which suggests that posthumanism implies post-Humanism (as in "after Humanism"). This is the suggestion that humans have no inherent right to destroy nature or to ethically set themselves above it. Human knowledge, once seen as the defining aspect of the world, is also demoted to a less controlling position. These posthumanists admit the limitations and fallibility of human intelligence, even though they do not suggest abandoning the rational tradition of humanism.

While I don't necessarily agree with this line of thinking, I do find the idea of human primacy a bit outdated in consideration of human enhancement, the presence of nonhuman animal persons, and the potential for artificially intelligent persons. But I don't agree that Humanism implies human domination over nature or an indifference to it.

At any rate, these are all very different approaches to posthumanism. Transhumanist posthumanists are concerned with the internal world as they look to modify minds and bodies, often in relation to a complexifying and changing environment. Academic/philosophical posthumanists, on the other hand, are interested in identity and interaction – and they're certainly not too keen on the human-redesign front.

Transhumanists definitely faced some criticism from these folks at the Parsons conference. This differing of opinion and perspective resulted in some interesting, provocative and at times heated moments. But it was all good, as it injected some needed passion and contradiction into an otherwise consensual and agreeable gathering of minds and ideas.

May 18, 2011

Humanity+ @ Parsons recap: Beyond enhancement

Just got back from New York City where I attended the Humanity+ @ Parsons conference on May 14th and 15th. I always have a great time at these events, and this conference was no exception.

I'll be writing about the conference over the coming days and weeks, but I will say that it was interesting to see all the emphasis placed not on enhancement per se, but on alternative forms of human re-design and modification. Kinda makes sense if you think about it: it was a design-meets-transhumanism conference after all. But that said, I'm left wondering if it's part of a broader trend.

Transhumanists, it would seem, are not as purely fixated on augmentation as they used to be; it’s becoming more than just about being smarter, faster, or stronger. It’s also about acquiring novel capacities and being able to experience different things.

One thing I did observe, however, was that it was the transhumanists and not the designers who emphasized these points. I am surprised at how little consideration designers, architects and artists still give to the idea of human re-engineering. They're still largely fixated on externalities—things like interface design, user experience, and environmental factors.

Now, there's nothing necessarily wrong with these things, but we need to also consider making meaningful alterations to the human body and mind as well. As I said during my talk on designer psychologies, it's time to start changing our minds and bodies to suit our environment and technologies rather than the other way around.

Fundamentally, a lot of this reluctance (or just sheer ignorance) has to do with the design community's adoption of an academic posthumanism that's rooted in postmodernist thinking (I will elaborate on this in a future post). This is contrasted with the transhumanist take on posthumanism which is driven by secular Humanist and Enlightenment ideals.

So, as noted, a number of transhumanists addressed the issue of human modification and re-design outside the context of mere enhancement.

Artificial intelligence theorist Ben Goertzel argued that, as we work to create AGI (artificial general intelligence), we'll have to create minds that can interpret and navigate through specific modal environments. Goertzel was addressing synthetic minds, but his point could be applied to humans as well. It made me wonder if we will someday be able to significantly modify human experience as it relates to environmental context.

Neuroscientist Anders Sandberg talked about the advent of novel capacities (such as new senses) that have no objective or easily distinguishable purpose. He gave the example of Todd Huffman's magnetic fingers which allow him to sense magnetic fields. Sandberg likened this to the body modification community. Modification can be done strictly for the sake of it, or just for personal experimentation. Sometimes it’s worth trying something weird or different just to see what happens; there isn't necessarily a problem to be solved. And at the very least it provides a fascinating outlet for human creativity and expression.

Similarly, bio-artist Adam Zaretsky made the claim that we should be more adventurous and imaginative when it comes to augmentation. While his ethics were at times suspect (he seemed to believe that we can modify and hybridize nonhuman animals indiscriminately), his argument that we should think of biology as both our medium and canvas struck a few chords with conference attendees. Zaretsky's flesh fetish and resultant shock art showed that the potential for out-of-the-box modifications is significant and bizarre, but that it can only be explored given more daring (and an apparent love of icky things). He put it aptly when he said, "Humanity is nature in drag."

Bioethicist James Hughes had a unique take on things with his talk on building resilient minds. While I would agree that this could be classified as a kind of enhancement, the types of cognitive changes he talked about were fairly malleable and context-specific. It seemed more alt-transhumanism to me when compared to traditional discussions about improved memory, enhanced intelligence, and so on. Perhaps Hughes's most interesting suggestion was that we should be able to alter our brain state to match our situation or predicament; we would essentially be changing our natures on the fly in order to cope and adapt. Very post-9/11 transhumanism.

And as for my talk on designer psychologies, I basically argued in favour of creating alternative minds. Using autism as an example, I argued that there is tremendous value and potential in increased neurodiversity, and that we, as neurotypicals, need to be careful about labeling these different kinds of thinking as pathological. While I agree that some conditions are worthy of such distinctions, we need to be open-minded to the possibility that alternative psychologies have an intrinsic value that can yield novel experiences and, as a result, create entirely new expressions, insights and experiences (I'll publish my entire talk a bit later).

Now, as the transhumanist diehards are inclined to remind me, much of this isn’t really anything new. Transhumanists have been talking about body modification, alternative minds and novel capacities since day one. But it was nice to see such consensus at the same conference—a strong indication that these ideas are gaining currency and becoming a larger part of the conversation. It’s good to see more lateral thinking when it comes to considering new capacities and the motives behind our desires to reshape the human condition.