April 30, 2009

Guest blogger David Pearce answers your questions (part 3)

David Pearce is guest blogging this week.

Michael Kirkland alleges that the abolitionist project violates the Golden Rule.

Maybe. But recall J.S. Mill: “In the golden rule of Jesus of Nazareth, we read the complete spirit of the ethics of utility.” The abolitionist project, and indeed superhappiness on a cosmic scale, follows straightforwardly from application of the principle of utility in a postgenomic era. Yes, there are problems in interpreting an ethic of reciprocity in the case of non-human animals, small children, and the severely mentally handicapped. But the pleasure-pain axis seems to be common to the vertebrate line and beyond.

Michael also believes that my ideas about predators are "monstrous" and "genocidal".

OK, here is a thought-experiment. Imagine if there were a Predator species that treated humans in the way cats treat mice. Let's assume that the Predator is sleek, beautiful, elegant, and endowed with all the virtues we ascribe to cats. In common with cats, the Predator simply doesn't understand the implications of what it is doing when tormenting and killing us, in virtue of its defective theory of mind. What are our policy options? We might decide to wipe out the race of Predators altogether - "genocide" so to speak - from the conviction that a race of baby-killers was no more valuable than the smallpox virus. Or alternatively, we might be more forgiving and tweak its genome, reprogramming the Predator so it no longer preyed on us (or anyone else). However, perhaps one group of traditionally-minded humans decides to protest against adopting even the second, "humane" option. Tweaking its genome so the Predator no longer preys on humans would destroy a vital part of its species essence, the ecological naturalists claim. Curbing its killer instincts would be "unnatural" and have dangerously unpredictable effects on a finely-balanced global ecosystem (etc). Perhaps a few extremists even favour a "rewilding" option in which the Predator would be reintroduced to human habitats where it is now extinct.

Should we take such an ethic seriously? I hope not.

Granted, the above analogy sounds fantastical. But the parallel isn't wholly far-fetched. If your child in Africa has just been mauled to death by a lion, you'll be less enthusiastic about the noble King Of The Beasts than a Western armchair wildlife enthusiast. A living world based around creatures eating each other is barbaric; and later this century it's going to become optional. One reason we're not unduly troubled by the cruelties of Darwinian life is that our wildlife "documentaries" give a Disneyfied view of Nature, with their tasteful soundtracks and soothing commentary to match. Unlike on TV news, the commentator never says: "some of the pictures are too shocking to be shown here". But the reality is often grisly in ways that exceed our imagination.

In response to Leafy: Are cats that don't kill really "post-felines" - not really cats at all? Maybe not, but only in a benign sense. A similar relationship may hold between humans and the (post-)humans we are destined to become.

Carl worries that I might hold a "[David] Chalmers-style 'supernatural' account of consciousness".

We differ, but like Chalmers, I promise I am a scientific naturalist. The behaviour of the stuff of the world is exhaustively described by the universal Schrödinger equation (or its relativistic generalization). This rules out both dualism (causal closure) and epiphenomenalism (epiphenomenal qualia would lack the causal efficacy to talk about their own existence). But theoretical physics is completely silent on the intrinsic nature of the stuff of the world; physics describes only its formal structure. There is still the question of what "breathes fire into the equations and makes a universe for them to describe", in Hawking's immortal phrase; or alternatively, in John Wheeler's metaphor, "What makes the Universe fly?"
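(To put the formalism itself on the table: in its non-relativistic form the equation reads

    iħ ∂Ψ/∂t = ĤΨ

where Ψ is the universal state vector and Ĥ the Hamiltonian. The equation specifies exhaustively how the state evolves; the symbols are silent on the intrinsic nature of what they describe.)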

Of the remaining options, monistic idealism is desperately implausible and monistic materialism is demonstrably false (i.e. one isn't a zombie). So IMO the naturalist must choose the desperately implausible option over the demonstrably false one. See Galen Strawson [Consciousness and Its Place in Nature: Does physicalism entail panpsychism? (2006)] for a defence of an ontology he'd hitherto dismissed as "crazy"; and Oxford philosopher Michael Lockwood [Mind, Brain and the Quantum (1991)] on how one's own mind/brain gives one privileged access to the intrinsic nature of the "fire in the equations".

I think two distinct problems of consciousness need to be distinguished here. A solution to the second presupposes a solution to the first.

First, why does consciousness exist in the natural world at all? This question confronts the materialist with the insoluble "explanatory gap". But for the monistic idealist who thinks fields of microqualia are all that exist, this question is strictly equivalent to explaining why anything at all exists. Mysteries should not be multiplied beyond necessity.

Secondly, why is it that, say, an ant colony or the population of China or (I'd argue) a digital computer - with its classical serial architecture and "von Neumann bottleneck" - doesn't support a unitary consciousness beyond the aggregate consciousness of its individual constituents, whereas a hundred billion (apparently) discrete but functionally interconnected nerve cells of a waking/dreaming vertebrate CNS can generate a unitary experiential field? I'd argue that it's the functionally unique valence properties of the carbon atom that generate the macromolecular structures needed for unitary conscious mind from the primordial quantum minddust. No, I don't buy the Hameroff-Penrose "Orch-OR" model of consciousness. But I do think Hameroff is right insofar as the mechanism of general anaesthesia offers a clue to the generation of unitary conscious mind via quantum coherence. The challenge is to show (a) how ultra-rapid thermally-induced decoherence can be avoided in an environment as warm and noisy as the mind/brain; and (b) how to "read off" the values of our qualia from the values of the solutions of the quantum mechanical formalism.

Proverbially, the dominant technology of an age supplies its root metaphor of mind. Our dominant technology is the digital computer. IMO the next is going to be the quantum computer - a new root metaphor both of conscious mind and of the multiverse itself. But if your heart sinks when you read the title of any book - or blog post - containing the words "quantum" and "consciousness", then I promise mine does too.

David Pearce
dave@hedweb.com
http://www.hedweb.com/

April 29, 2009

Much ado about swine flu

Well, mark another successful prognostication by yours truly: Back on November 6 I correctly predicted that Barack Obama would likely have to manage a pandemic at some point during his presidency. Little did I realize that it would happen so quickly, arriving at the 100-day mark of his administration.

But thankfully it is far less severe than I had imagined (assuming, of course, that there won't be another). Yes, the swine flu is now poised to sweep through the human population, but it's a relatively mild bit of nastiness. The media, I believe, is largely responsible for overhyping the situation.

It's worth noting that on any given day about 100 Americans and a dozen Canadians die from the regular flu. Those who die from it are typically the elderly or those with pre-existing conditions. At the time of this writing it's largely unclear how many people have died from the swine flu in total. Some reports say over 150, while others claim no more than 7. It's clear that this ain't Ebola.
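[Back-of-envelope, assuming the commonly cited estimates of roughly 36,000 U.S. and 4,000 Canadian seasonal flu deaths per year: 36,000/365 ≈ 99 deaths per day and 4,000/365 ≈ 11, hence "about 100 Americans and a dozen Canadians."]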

Where there should be concern, however, is that the swine flu, due to its epidemic rate of infection, is set to hit more people than the usual strain. This will undoubtedly result in more deaths than usual, so in this sense it's a serious problem. But as for the apocalypticism that's being perpetuated right now, let's not get carried away.

Now, that said, I'm also concerned that the strain will have a heightened chance of genetic mutation due to the dramatic increase in its propagation rate. This is a concern because it could quickly evolve (either alone or in conjunction with another virus) into something far, far worse. It's for this reason that the WHO and other institutions (including world governments) should keep a close eye on this bug and the situation.

Lastly, to all the conspiracists who claim that this is a man-made virus meant to revitalize the economy: get a life and lose the paranoia.

[Be sure to read Michael Anissimov's take on the matter: "Following the flu and catastrophic risk in general"]

Bailey: Transhumanism and the Limits of Democracy

Reason Online's science correspondent Ronald Bailey has published a paper he presented last week at Arizona State University's Center for the Study of Religion and Conflict Workshop on Transhumanism and the Future of Democracy.

The workshop addressed questions such as: How does the enhancement of human beings through biotechnology, information technology, and applied cognitive sciences affect our understanding of autonomy, personhood, responsibility and free will? And how much and what type of societal control should be exercised over the use of enhancement technologies?

In his paper, Bailey argues that a number of democratic transhumanists, including James Hughes, have "fetishized" democratic decision-making over the protection of minority rights. Instead, argues Bailey, transhumanism should be accepted as a reasonable comprehensive doctrine that should be tolerated in liberal societies by those who disagree with its goals.

Bailey, who is one of the movement's most vociferous advocates (although I doubt he'd refer to it as a "movement"), is largely arguing on behalf of the libertarian perspective. What he describes as 'democracy' in this context is any kind of collective or institutional interference with what he considers to be our civil liberties. In other words, Bailey feels that morphological, cognitive and reproductive liberties need to be protected against the reactionary masses and bureaucratic interference. "Technologies dealing with birth, death, and the meaning of life need protection from meddling—even democratic meddling—by those who want to control them as a way to force their visions of right and wrong on the rest of us," writes Bailey. "One's fellow citizens shouldn't get to vote on with whom you have sex, what recreational drugs you ingest, what you read and watch on TV and so forth."

In addition, Bailey illustrates the problems of democratic authoritarianism by detailing some of the history of legal interference with reproductive rights. He also analyzes the various arguments used by opponents of human enhancement which they hope will sway a majority into essentially outlawing the transhumanist enterprise.

Read the entire article.

Guest blogger David Pearce answers your questions (part 2)

David Pearce is guest blogging this week.

Here are two more replies in response to questions about my Abolitionist Project article from earlier this week.

Carl makes an important point: "Why think that affective gradients are necessary for motivation at all? Consider minds that operate with formal utility functions instead of reinforcement learning. Humans are often directly motivated to act independently of pleasure and pain."

Imagine if we could find a functionally adequate substitute for the signaling role of negative affect - a bland term that hides a multitude of horrors - and replace its nastiness with formal utility functions. Why must organic robots like us experience the awful textures of physical pain, depression and malaise, while our silicon robots function well without them? True, most people regard life's heartaches as a price worth paying for life's joys. We wouldn't want to become zombies.

But what if it were feasible to "zombify" the nasty side of life completely while amplifying all the good bits - perhaps so we become "cyborg buddhas"?

More radically, if the signaling role of affect proves dispensable altogether, it might be feasible computationally to offload everything mundane onto smart prostheses - and instead enjoy sublime states of bliss every moment of our lives, without any hedonic dips at all. I say more on this theme in my reply to "Wouldn't a permanent maximum of bliss be better?" I need scarcely add this is pure speculation.

Leafy asks me to comment on an "animal welfare state, and [...] how your views about the treatment of nonhuman animals (e.g., that animals need care and protection, not liberation, and when animal use or domination might be morally acceptable) differ from those of people such as Singer and Francione".

First, let's deal with an obvious question. Millions of human infants die needlessly and prematurely in the Third World each year. Shouldn't we devote all our energies to helping members of our own species first? To the extent humans suffer more than non-humans, I'd answer: yes - though rationalists should take extraordinary pains to guard against anthropocentric bias. Critically, there is no evidence that domestic, farm or wild mammals are any less sentient than human infants and toddlers. If so, we should treat their well-being impartially. A critic will respond here that human infants have moral priority because they have the potential to become full-grown adults - with the moral primacy that we claim. But we wouldn't judge a toddler with a terminal disease who will never grow up to deserve any less love and care than a healthy youngster. Likewise, the fact that a dog or a chimpanzee or a pig will never surpass the intellectual accomplishments of a three-year-old child is no reason to let them suffer more. Thus I think it's admirable that we spend a hundred thousand dollars trying to save the life of a 23-week-old extremely premature baby; but it's incongruous that we butcher and eat billions of more sophisticated sentient beings each year. Actually, IMO words can't adequately convey the horror of what we're doing in factory farms and slaughterhouses. Self-protectively, I try to shut it out most of the time. After all, my intuitions reassure me, they're only animals; what's going on right now can't really be as bad as I believe it to be. Yet I'm also uncomfortably aware this is moral and intellectual cowardice.

Is a comprehensive welfare system for non-human animals technically feasible? Yes. The implications of an exponential growth of computing power for the biosphere are exceedingly counterintuitive. See, for example, The Singularity Institute or Ray Kurzweil -- though I'm slightly more cautious about timescales. In any event, by the end of the century we should have the computational resources to micromanage an entire planetary ecosystem. Whether we will actually use those computational resources systematically to promote the well-being of all sentient life in that kind of timeframe is another question; presumably it's unlikely. However, we already - and without the benefit of quantum supercomputers - humanely employ, for example, depot-contraception rather than culling to control the population numbers of elephants in some overcrowded African national parks. Admittedly, ecosystem redesign is only in its infancy; and we've barely begun to use genetic engineering, let alone genomic rewrites. But if our value system dictates, then we could use nanobots to go to the furthest ends of the Earth and the deep oceans and eradicate the molecular signature of unpleasant experience wherever it is found. Likewise, we could do the same to the genetic code that spawns it. In any case, for better or worse, by mid-century large terrestrial mammals are unlikely to survive outside our "wildlife" reserves simply in virtue of habitat destruction. How much suffering we permit in these reserves is up to us.

Gary Francione and Peter Singer? Despite their different perspectives, I admire them both. As an ethical utilitarian rather than a rights theorist, I'm probably closer to Peter Singer. But IMO a utilitarian ethic dictates that factory-farmed animals don't just need "liberating", they need to be cared for. Non-human animals in the wild simply aren't smart enough to adequately look after themselves in times of drought or famine or pestilence, for instance, any more than are human toddlers and infants, and any more than were adult members of Homo sapiens before the advent of modern scientific medicine, general anaesthesia, and painkilling drugs. [Actually, until humanity conquers ageing and masters the technologies needed reliably to modulate mood and emotion, this control will be woefully incomplete.]

At the risk of over-generalising, we have double standards: an implicit notion of "natural" versus "unnatural" suffering. One form of suffering is intuitively morally acceptable, albeit tragic; the other is intuitively morally wrong. Thus we reckon someone who lets their pet dog starve to death or die of thirst should be prosecuted for animal cruelty. But an equal intensity of suffering is re-enacted in Mother Nature every day on an epic scale. It's not (yet) anybody's "fault." But as our control over Nature increases, so does our complicity in the suffering of Darwinian life "red in tooth and claw". So IMO we will shortly be ethically obliged to "interfere" [intervene] and prevent that suffering, just as we now intervene to protect the weak, the sick and the vulnerable in human society.

But here comes the real psychological stumbling-block. One of the more counterintuitive implications of applying a compassionate utilitarian ethic in an era of biotechnology is our obligation to reprogram and/or phase out predators. In the future, I think a lot of thoughtful people will be relaxed about phasing out/reprogramming, say, snakes or sharks. But over the years, I've received a fair bit of hate-mail from cat-lovers who think that I want to kill their adorable pets. Naturally, I don't: I'd just like to see members of the cat family reprogrammed [or perhaps "uplifted"] so they don't cause suffering to their prey. As it happens, I've only once witnessed a cat "playing" with a tormented mouse. It was quite horrific. Needless to say, the cat was no more morally culpable than a teenager playing violent videogames, despite the suffering it was inflicting. But I've not been able to enjoy watching a Tom-and-Jerry cartoon since. Of course the cat's victim was only a mouse. Its pain and terror were probably no worse than mine the last time I caught my fingers in the door. But IMO a sufficiently Godlike superintelligence won't tolerate even a pinprick's worth of pain in post-human paradise. And (demi)gods, at least, is what I predict we're going to become...

David Pearce
dave@hedweb.com
http://www.hedweb.com/

April 28, 2009

Guest blogger David Pearce answers your questions

David Pearce is guest blogging this week

In yesterday's post I promised I'd try to respond to comments and questions. Here is a start - but alas I'm only skimming the surface of some of the issues raised.

t.theodorus ibrahim asks: "what are the current prospects of a merger or synergy between transhumanist concerns of the rights of natural humans within a world dominated by greater intelligences and the existing unequal power relations between animals - especially between human and non-human animals? Will the human reich give way to a transhuman reich?!"

The worry that superintelligent posthumans might treat natural humans in ways analogous to how humans treat non-human animals is ultimately (I'd argue) misplaced. But reflection on such a disturbing possibility might (conceivably) encourage humans to behave less badly to our fellow creatures. Parallels with the Third Reich are best used sparingly - the term "Nazi" is too often flung around in overheated rhetoric. Yet when a Jewish Nobel laureate (Isaac Bashevis Singer, The Letter Writer) writes: "In relation to them, all people are Nazis; for the animals it is an eternal Treblinka", he is not using his words lightly. Whether we're talking about vivisection to satisfy scientific curiosity or medical research, or industrialized mass killing of "inferior" beings, the parallels are real - though like all analogies, when pressed too far they eventually break down.

One reason I'm optimistic some kind of "paradise engineering" is more likely than dystopian scenarios is that the god-like technical powers of our descendants will most likely be matched by a god-like capacity for empathetic understanding - a far richer and deeper understanding than our own faltering efforts to understand the minds of sentient beings different from "us". Posthuman ethics will presumably be more sophisticated than human ethics - and certainly less anthropocentric. Also, it's worth noting that a commitment to the well-being of all sentience is enshrined in the Transhumanist/H+ Declaration -- a useful reminder of what (I hope) unites us.

Spaceweaver, if I understand correctly, argues that perpetual happiness would lead to stasis. Lifelong bliss would bring the evolutionary development of life to a premature end: discontent is the motor of progress.

I'd agree this sort of scenario can't be excluded. Perhaps we'll get stuck in a sub-optimal rut: some kind of fool's paradise or Brave New World. But IMO there are grounds for believing that our immediate descendants (and perhaps our elderly selves?) will be both happier and better motivated. Enhancing dopamine function, for instance, boosts exploratory behaviour, making personal stasis less likely. By contrast, it's depressives who get "stuck in a rut". Moreover if we recalibrate the hedonic treadmill rather than dismantle it altogether, then we can retain the functional analogues of discontent minus its nasty "raw feels". See "Happiness, Hypermotivation and the Meaning Of Life."

FrF asks: "Could it be that transhumanists who are, for whatever reason, dissatisfied with their current lives overcompensate for their suffering by proposing grand schemes that could potentially bulldoze over all other people?"

For all our good intentions, yes. The history of utopian thought is littered with good ideas that went horribly wrong. However, technologies to retard and eventually abolish ageing, or tools to amplify intelligence, are potentially empowering - liberating, not "bulldozing". Transhumanists/H+-ers aren't trying to force anyone to be eternally youthful, hyperintelligent or even - God forbid - perpetually happy.

Genuine dilemmas do lie ahead: for instance, is a commitment to abolishing aging really consistent with a libertarian commitment to unlimited procreative freedom? But I think one reason many people are suspicious of future radical mood-enhancement technologies, for instance, is the curious fear that someone, somewhere is going to force them to be happy against their will.

It's also worth stressing that radically enriching hedonic tone is consistent with preserving most of your existing preference architecture intact: indeed in one sense, raising our "hedonic set-point" is a radically conservative strategy to improve our quality of life. Uniform bliss would be the truly revolutionary option: and uniform lifelong bliss probably isn't sociologically viable for an entire civilisation in the foreseeable future. However, stressing a future of hedonic gradients and the possibility of preserving one's preference architecture arguably underplays (IMO) the cognitive significance of the hedonic transition in prospect. Just as the world of the clinically depressed today is unimaginably different from the world of the happy, so I reckon the world of our superhappy successors will be unimaginably more wonderful than contemporary life at its best. In what ways? Well, even if I knew, I wouldn't have the conceptual equipment to say.

Visigoth rightly points out that any long-acting oxytocin-therapy [or something similar] would leave us vulnerable to being "suckered". Could genetically non-identical organisms ever wholly trust each other - or ever be wholly trustworthy - without compromising the inclusive fitness of their genes? Maybe not. I discuss the nature of selection pressure in a hypothetical post-Darwinian world here: "The Reproductive Revolution: Selection Pressure in a Post-Darwinian World". Needless to say, any conclusions drawn are only tentative.

Away from genetics, it's worth asking what it means to be "suckered" when life no longer resembles a zero-sum game and pleasure is no longer a finite resource - a world where everyone has maximal control over their own reward circuitry. Right now this scenario sounds fantastical; but IMO the prospect is neither as fantastical (nor as remote) as it sounds. See the Affective Neuroscience and Biopsychology Lab for how close we're coming to discovering the anatomical location and molecular signature of pure bliss.

More tomorrow....

David Pearce
dave@hedweb.com
http://www.hedweb.com/

April 27, 2009

What is a person?

A number of Sentient Developments readers have asked what I mean when I refer to non-human persons and the personhood spectrum. It's a fair question, and to be honest, I have yet to see a satisfying personhood taxonomy with an attendant list of traits that fully circumscribe the personhood continuum. I consider this an incredibly important issue as we move into a 'transhuman condition' and as we work to give non-human animals greater moral consideration. If I ever go back to school I think this will be a likely topic for a thesis.

A big question I would like to answer is this: should personhood status be described as a spectrum or as a definitive, fixed state? In other words, are dolphins and bonobos as much persons as a genetically modified and cyborgized transhuman? And is such a distinction even necessary? Should persons, regardless of where they are situated in the personhood spectrum, all have the same moral and legal considerations? More philosophically, given the space of all possible minds, how can we begin to identify the space of all possible persons within that gigantic spectrum?

As for defining and circumscribing personhood, a number of thinkers have given it a shot. First out of the gate was Joseph Fletcher, an Episcopalian theologian and bioethicist, who argued for a list of fifteen “positive propositions” of personhood. These attributes are:
  • minimum intelligence
  • self-awareness
  • self-control
  • a sense of time
  • a sense of futurity
  • a sense of the past
  • the capability of relating to others
  • concern for others
  • communication
  • control of existence
  • curiosity
  • change and changeability
  • balance of rationality and feeling
  • idiosyncrasy
  • neocortical functioning
Many of Fletcher's traits are fairly subjective, open to argument (e.g. how do you measure intelligence, and how intelligent is intelligent enough?) and difficult to test scientifically (at least by today's standards). But what's interesting about this list is that not all human beings qualify as persons, and not all persons qualify as human. Moreover, individuals may, at one time or another in their lives, fail to qualify as persons. Fletcher argued that some severely developmentally challenged humans were not persons, and that chimeras and cyborgs might someday qualify as persons (what he called "parahumans").

Further, as Linda MacDonald Glenn noted in her paper, "When Pigs Fly? Legal and Ethical Issues in Transgenics and the Creation of Chimeras," Fletcher's list is more of a continuum than a description of a definitive and fixed state (which is not necessarily a problem -- it's an idea I'm rather partial to) -- the advantage being that a continuum would serve as a better model for application to legal theory and practice.

Looking to the future, and as we move forward with NBIC technologies, we run the risk of denying essential basic liberties to intelligent and sentient beings should we fail to better elucidate what it means to be a person (whether they be non-human animals or artificially intelligent agents). As Glenn notes, we need to be prepared to ask, "How can we preserve our human rights and dignity despite the fact that our 'humanness' may no longer be the exclusive possession of Homo sapiens?"

Thankfully there appears to be a trend in favor of widening the circle of moral consideration to some non-human animals. We obviously have laws against animal abuse, against some forms of animal experimentation, and against unacceptable levels of confinement. More significantly, however, a number of countries are moving to grant highly sapient and emotional non-human animals like the great apes proper personhood status along with all the attendant legal protections.

Ultimately, what a lot of people need to realize is that their status as persons will not be diminished should "lesser" animals be granted personhood status. This is a common concern -- that it would be undignified for humans to have to recognize the presence of other persons who are not human.

There are two things I'll say to that: First, it's our humaneness and sense of social justice that's important -- not that we're "human". And second, as we work to develop greater-than-human artificial intelligence, we are poised to lose our exalted status as the most "highly evolved" creature on the planet. We'd better position our laws and social mechanisms in such a way that all persons will be protected when the time comes (the caveat being that we'll actually have a say in the matter once we hit that Singularity point).

Support the Great Ape Project.

TED: P.W. Singer: Military robots and the future of war


For his TED talk, military robotics expert P.W. Singer shows how the widespread use of robots in war is changing the realities of combat. His talk is alarming and sobering -- but it needs to be said. In addition to this video, I suggest you read the article, "Towards a largely robotic battlefield."

Singer's bio:
Peter Warren Singer is the director of the 21st Century Defense Initiative at the Brookings Institution -- where his research and analysis offer an eye-opening take on what the 21st century holds for war and foreign policy. His latest book, Wired for War, examines how the U.S. military has been, in the words of a recent US Navy recruiting ad, "working hard to get soldiers off the front lines" and replacing humans with machines for bombing, flying and spying. He asks big questions: What will the rise of war machines mean to traditional notions of the battlefield, like honor? His 2003 book Corporate Warriors was a prescient look at private military forces. It's essential reading for anyone curious about what subsequently happened in Iraq involving these quasi-armies.

Singer is a prolific writer and essayist (for Brookings, for newspapers, and for Wired.com’s great Threat Level), and is expert at linking popular culture with hard news on what’s coming next from the military-industrial complex. Recommended: his recent piece for Brookings called "A Look at the Pentagon's Five-Step Plan for Making Iron Man Real."
Via Theoretical Transhumanism.

The Abolitionist Project: Using biotechnology to abolish suffering in all sentient life

David Pearce is guest blogging this week

First, many thanks to George for inviting me to blog on Sentient Developments. I asked George what I should blog about. He suggested I might start with The Hedonistic Imperative. This topic might be more interesting to readers of Sentient Developments if I respond to critical questions or blog on themes readers feel I've unjustly neglected. If so, please let me know.

Briefly, some background. In 1995 I wrote an online manifesto which advocates the use of biotechnology to abolish suffering in all sentient life. The Hedonistic Imperative predicts that the world's last unpleasant experience will be a precisely dateable event in the next thousand years or so - probably a "minor" pain in some obscure marine invertebrate. More speculatively, HI predicts that our descendants will be animated by genetically preprogrammed gradients of intelligent bliss - modes of well-being orders of magnitude richer than today's peak experiences.

I write from the perspective of what is uninspiringly known as negative utilitarianism i.e. I'd argue that we have an overriding moral responsibility to abolish suffering. If my background had been a bit different, I'd probably just call myself a scientifically-minded Buddhist. True, Gautama Buddha didn't speak about biotechnology; but to Buddhists (and Jains) talk of engineering the well-being of all sentient life is less likely to invite an incredulous stare than it does in the West.

I should also add that credit for the first published scientifically literate blueprint for a world without suffering belongs IMO to Lewis Mancini. See "Riley-Day Syndrome, Brain Stimulation and the Genetic Engineering of a World Without Pain", Medical Hypotheses (1990) 31, 201-207. As far as I can tell, Mancini's original paper sank with barely a trace. However, it is now online where it belongs: I've uploaded the text here: http://www.wireheading.com/painless.html.
[I confess my jaw dropped a couple of years ago when I stumbled across it.]

HI was originally written for an audience of analytic philosophers. The Abolitionist Project (2007) http://www.abolitionist.com/ and Superhappiness (2008) http://www.superhappiness.com/ are (I hope) more readable and up-to-date. I won't now go into the technical reasons for believing we can use biotech, robotics and nanotechnology to eradicate the molecular substrates of suffering and malaise from the biosphere. Given the exponential growth of computing power and biotechnology, the abolitionist project could in theory be completed in two or three centuries or less. This timescale is unlikely for sociological reasons. So why should anyone think it's ever going to happen? All sorts of stuff is technically feasible in principle; but a lot of so-called futurology is just a mixture of disguised autobiography and wish-fulfillment fantasy. Is this any different?

Quite possibly not; but here are two reasons for guarded optimism.

Futurists spend a lot of time discussing the possibility of posthuman superintelligence. Whatever else superintelligence may be, we implicitly assume that it must be at least weakly related to what IQ tests measure - just completely off the scale. However, IQ tests ignore one important and extraordinarily cognitively demanding skill that non-autistic humans possess. At least part of what drove the evolution of our uniquely human intelligence was our superior "mind-reading" skills and enhanced capacity for empathetic understanding of other intentional systems. This capacity is biased, selective, and deeply flawed; but I'd argue its extension and enrichment are going to play a critical role in the development of intelligent life in the universe. By contrast, conventional IQ tests are "mind-blind"; they simply ignore social cognition. I'd argue that our posthuman descendants will have a vastly richer capacity to understand the perspective of "what it is like to be" other sentient beings; and this recursively self-improving empathetic capacity will be a vital ingredient of mature superintelligence and posthuman ethics. Of course "super-empathy" doesn't by itself guarantee a utopian outcome. And I'm personally sceptical that digital computers with a classical von Neumann architecture will ever be sentient, let alone superintelligent. But a future (hypothetical) superhuman capacity for empathetic understanding does, I think, make a universal compassion for all sentient beings more likely.

Viewing the way we currently treat other sentient beings as a cognitive and not just a moral limitation is of course controversial. So secondly, let's fall back on a more cynical and conservative assumption. Assume, pessimistically, that what Bentham says of humans will be true of posthumans too: "Dream not that men will move their little finger to serve you, unless their advantage in so doing be obvious to them. Men never did so, and never will, while human nature is made of its present materials." Does this bleak analysis of (post)human nature rule out a world that supports the well-being of all sentience?

No, I don't think so. If it's broadly correct, what this limitation does mean is that morally serious actors today should strive to develop advanced technology that makes the expression of (weak) benevolence towards other sentient beings trivially easy - so easy that its expression involves less effort on the part of the morally apathetic than raising one's little finger. For example, whereas one way to combat the cruelty of factory farming is to use moral arguments to promote its abolition - as in their very different ways do PETA and Peter Singer - the other, complementary strategy is to promote technologies that will allow "us" all to lead a cruelty-free lifestyle at no personal cost. Thus see the nonprofit research organization New Harvest, which is advancing meat substitutes: http://www.new-harvest.org/.

Thirty years hence, if meat-eaters are presented with two equally tasty products, one "natural" from an intensively-reared factory-farmed animal that's been butchered for its flesh as now, the other labelled "cruelty-free" in the form of attractively branded vatfood, how many consumers are deliberately going to choose the cruel option if it doesn't taste better? I'm aware that this kind of optimism can sound naive. Yes, we can all be selfish; but I think relatively few people are malicious, and still fewer people are consistently malicious. So long as the slightest personal inconvenience to members of the master species can be avoided, I think we can extend the parallel of developing cruelty-free cultured meat to the eradication of suffering throughout the living world: ecosystem redesign, depot-contraception, rewriting the vertebrate genome, the lot. With sufficiently advanced technology, the creation of a living world without cruelty needn't be effortful or burdensome to the morally indifferent. Technology can make what is today impossibly difficult soon merely challenging, then relatively easy, and eventually trivial. And of course a lot of people do aspire to be more than merely weakly benevolent. Maybe we're "really" just signalling to potential mates our desirability as nurturing fathers [or whatever story evolutionary psychology tells us explains our altruistic desires]. But what matters is not our motivation or its ultimate cause, but the outcome.

A cruelty-free world is one thing; but many of us feel ambivalent about extreme happiness, let alone lifelong superhappiness of the kind promised by utopian neurobiology. One reason we may feel ambivalent is that we contemplate, for instance, the selfishness and drug-addled wits of the heroin addict; or the crazed lever-pressing of the rodent wirehead; or the impaired judgement of the euphorically manic. Intellectuals especially may be resistant to the prospect of superhappiness, fearing that their intellectual acuity may be compromised. Beyond a certain point, must there be some kind of tradeoff between hedonic tone and intellectual performance?

Not necessarily. Here is just one way in which reprogramming our reward circuitry could actually serve as a tool for intelligence-amplification and cognitive enhancement. Recall Edison's much-quoted dictum: “Genius is one percent inspiration and ninety-nine percent perspiration.” The relative percentages are disputable; but the contribution of sheer hard work and intellectual focus to productivity isn't in doubt. Now if you're a student, an academic or an intellectual, imagine if you could selectively amplify the subjective reward you derive from all and only the cerebral activities that you think you ought to enjoy doing most; and conversely, imagine if you could diminish or switch off altogether the reward from life's baser pleasures. What might you achieve intellectually if you could reprogram your reward circuitry so that you could work pursuing your highest aspirations for 14 hours a day? By way of contrast, using the Internet offers an uncomfortable insight into what one is really interested in. [Sadly, I lose track of the endless hours I've wasted online viewing complete fluff. I tell myself that I'm soon going to enjoy writing a 500 page scholarly tome, The Abolitionist Project. Alas in practice it's more fun surfing the Net for trivia.] In any event, IMO the enemy of intelligence isn't bliss but indiscriminate, uniform bliss; and in the future I think superhappiness and superintelligence can be fused - seamlessly or otherwise.

Are there pitfalls here? Yes, lots. But they are technical problems with a technical solution.

Here's another example. One reason we may be ambivalent about extreme happiness is that we see how it can make people antisocial. One thinks of the heroin addict who neglects his family for the sake of his opioid habit. But what if safe, sustainable designer drugs or gene therapies were available that conferred an unlimited capacity for altruistic pleasure? It's only recently been discovered that the empathogenic hugdrug MDMA (Ecstasy) http://www.mdma.net/ triggers copious release of the "trust hormone" oxytocin: oxytocin seems to be the missing jigsaw piece in explaining MDMA's unique spectrum of action. So to take one scenario, what if mass oxytocin-therapy enabled us to be chronically kind, trusting and empathetic towards each other - the very opposite of the "selfish hedonism" of popular stereotype?

Moreover this option isn't just a matter of personal lifestyle choice; I think the implications are more far-reaching. Thoughtful researchers are increasingly concerned about existential and global catastrophic risks in an era of biowarfare, nanotechnology and weapons of mass destruction. Britain's Astronomer Royal, Sir Martin Rees, puts the odds of human extinction this century at 50%. I suspect this figure is too high, but clearly the risk is not negligible. Anyhow, arguably the greatest underlying source of existential and global catastrophic risk lies in the Y chromosome: testosterone-driven males are responsible for the overwhelming bulk of the world's wars, aggression and reckless behaviour. Decommissioning the Y chromosome isn't currently an option; but the potential civilizing influence of pro-social drugs and gene therapies on dominant alpha males shouldn't be lightly dismissed as a risk-reduction strategy. In general, a world where intelligent agents are happier, more trusting and more trustworthy is potentially a much safer world - and a much more civilised one too.

Are there pitfalls to modifying human nature? Again yes, lots. But there are also profound risks in retaining the biological status quo.

David Pearce
dave@hedweb.com
http://www.hedweb.com/

April 26, 2009

Most epidemics originate from livestock

It's too early to call the Swine Flu an epidemic, but I'd like to take this opportunity to remind readers that most epidemics throughout human history have originated from domesticated animals.

Measles, smallpox and tuberculosis came to humans from cattle, and the flu originated from pigs and ducks (including the avian flu). Whooping cough (pertussis), which causes 600,000 deaths per year worldwide, comes from pigs and dogs.

Interestingly, it doesn't appear that SARS originated from livestock, but instead from civet cats; that said, the first diagnosed patient was a farmer from Guangdong Province, China.

So not only would the elimination of livestock work to reduce climate change, it would dramatically reduce the chances of diseases being transmitted to human populations.

New feature: Yesterday's Tomorrow

I'm going to implement a new feature here on Sentient Developments called "Yesterday's Tomorrow." I have a great affinity for historical futurist visions and aesthetics. Look for me to regularly post photos and videos that capture futurist sensibilities of our past.

Image discovered on Posthuman Blues.

Economist: Safe without the bomb?

The April 11-17 edition of The Economist asks the question: can the world be safe without the bomb? A nuclear-free world may never come about, they argue, but there can be safety in trying:

Nuclear weapons cannot simply be wished away or uninvented. The technology is over 60 years old and the materials and skills needed are widely spread. Still, by infusing his idealism with a dose of realism Mr Obama can do more to create a safer world than simple “Ban the bomb” slogans ever could.

For zero nukes would make no sense if this left the world safe for the sorts of mass conventional warfare that consumed the first half of the 20th century. How many bombs would be needed to prevent that? And with what co-operation and controls to keep these remaining weapons from use? It is hard to say what sort of nuclear future would be more stable and peaceful until you get a lot closer to zero. Happily, the difficult steps needed to get safely to low numbers would all be needed for zero too. Mr Obama’s vision is helpful if it gets people thinking about imaginative ways forward.

Mr Obama is already committed to using the goal of zero to shape his future nuclear plans. Both America and Russia still have far more nuclear warheads than either wants. Even George Bush, no dewy-eyed disarmer, negotiated cuts down to 1,700-2,200 apiece by 2012 (from the 6,000 agreed upon after the cold war had ended) and was ready to go lower. Encouragingly, Mr Obama and his Russian counterpart, Dmitry Medvedev, have agreed that a modest cut will accompany new weapons-counting rules to be fixed by the end of the year, with more ambitious reductions to follow. All the official five except China have been trimming their arsenals too.

Hmm, now where have I heard this before? Oh, yeah -- right here on this blog.

Read the entire article.

John Maddox: The Skeptical Prophet [obit]

John Maddox, the former editor of Nature, died last week. Maddox was a controversial figure for his views on planetary resiliency; he frequently butted heads with environmentalists and global warming alarmists, and had been doing so since 1972.

He debunked the catastrophists, most notably in his book, “The Doomsday Syndrome,” in which he argued that the Earth had more carrying capacity and ecological resilience than environmentalists realized. His book was denounced at the time by John P. Holdren, who is today the White House science advisor.

Maddox did not dispute that carbon dioxide emissions could drive global warming, but said: "The IPCC [Intergovernmental Panel on Climate Change] is monolithic and complacent, and it is conceivable that they are exaggerating the speed of change."

Check out his obituary in the NYT.

April 25, 2009

Special glasses to prevent eye-contact with gorillas at zoos [FAIL]

The Rotterdam Zoo is providing cardboard glasses that make it appear visitors are looking off to one side. These are meant to prevent incidents in which gorillas attack visitors for making eye contact. The glasses, which are sponsored by a local health-insurance company, were introduced after an escaped gorilla attacked a woman.

This is just insanity. Gorillas simply do not belong in zoos where people get to gawk at them day after day. It's time for some serious animal welfare reform and the removal from zoos of all non-human animals who fall within the personhood spectrum.

April 24, 2009

The link between autism and extraordinary ability

Evidence is growing to support the suggestion that there is a link between genius and autism [duh, just hang out at any transhumanist conference for a taste of this]. This week's Economist takes a look at how that link might work and whether neurotypicals can benefit from the knowledge.

The read-between-the-lines suggestion here is that neurotypicals might be able to engage in cognitive enhancement by working to emulate the autistic brain.

Oh, but wait now, aren't autism and Asperger's supposed to be an 'affliction' and a 'blight'? Hmmm, sounds like an acute case of autism envy...

From the Economist:
That genius is unusual goes without saying. But is it so unusual that it requires the brains of those that possess it to be unusual in other ways, too? A link between artistic genius on the one hand and schizophrenia and manic-depression on the other is widely debated. However another link, between savant syndrome and autism, is well established. It is, for example, the subject of films such as “Rain Man”...

...A study published this week by Patricia Howlin of King’s College, London, reinforces this point. It suggests that as many as 30% of autistic people have some sort of savant-like capability in areas such as calculation or music. Moreover, it is widely acknowledged that some of the symptoms associated with autism, including poor communication skills and an obsession with detail, are also exhibited by many creative types, particularly in the fields of science, engineering, music, drawing and painting. Indeed, there is now a cottage industry in re-interpreting the lives of geniuses in the context of suggestions that they might belong, or have belonged, on the “autistic spectrum”, as the range of syndromes that include autistic symptoms is now dubbed.

So what is the link? And can an understanding of it be used to release flashes of genius in those whose brains are, in the delightfully condescending term used by researchers in the area, “neurotypical”? Those were the questions addressed by papers (one of them Dr Howlin’s) published this week in the Philosophical Transactions of the Royal Society. The society, Britain’s premier scientific club and the oldest scientific body in the world, produces such transactions from time to time, to allow investigators in particular fields to chew over the state of the art. The latest edition is the outcome of a conference held jointly with the British Academy (a similar, though younger, organisation for the humanities and social sciences) last September.
Read the entire article.

Philosopher David Pearce guest blogging next week

British philosopher David Pearce will be guest blogging on Sentient Developments next week.

Pearce is a seminal figure in the transhumanist movement, most notably for his work as an organizer (he co-founded the World Transhumanist Association (now Humanity+) with Nick Bostrom in 1998) and as a thought leader (particularly for his work as a negative utilitarian ethicist and abolitionist).

Pearce's contributions will fit in nicely here at SentDev. Like me, he believes and promotes the idea that there exists a strong ethical imperative for humans to work towards the abolition of suffering in all sentient life. He has a book-length internet manifesto called The Hedonistic Imperative in which he details how he believes the abolition of suffering can be accomplished through "paradise engineering".

A transhumanist and a vegan, Pearce also calls for the elimination of cruelty to animals. Among his websites, there are many devoted to the plight of animals. [Love it when people practice what they preach]

In The Hedonistic Imperative, Pearce outlines how technologies such as genetic engineering, nanotechnology, pharmacology, and neurosurgery could potentially converge to eliminate all forms of unpleasant experience in human life and produce a posthuman civilization.

Pearce is also currently the director of BLTC Research, a non-profit research organization that seeks to elucidate the underlying physiological mechanisms of physical and mental suffering, with the intention of eradicating it in all its forms. The goals of research in Better Living Through Chemistry include determining the final common neurological pathway of both pleasure and pain in the brain. Once this process is better understood, it could be possible to more effectively design medicines and other treatments for various mental illnesses, as well as cure the painful symptoms of many diseases.

David will be blogging on Sentient Developments from April 27 through to May 1. Note: David mentioned that he'd like to address topic requests from Sentient Developments readers. Feel free to post article suggestions and questions for David in this post's comments section.

[Hmmm, just realized that my last three guest bloggers have all been named David (Eagleman and Brin being the previous two). Weird]

April 19, 2009

John Maynard Smith on the future of human life [video]


Evolutionary biologist John Maynard Smith, who died on this day in 2004, lectures on human communication and how information technologies will lead to the next revolution in human evolution. In this video, Maynard Smith describes novel means of communication, non-DNA replication and the rise of post-biological life with greater-than-human capacities.

Fifth anniversary of the death of John Maynard Smith

Evolutionary biologist, game theorist and proto-transhumanist John Maynard Smith died on this day in 2004 at the age of 84. Born in London and known as "JMS" to his friends, Maynard Smith is most notably remembered for his work in biology, and particularly for introducing game theory to evolutionary biology.

He is also remembered for openly advocating the reengineering of humans, particularly making alterations to the genome and for speculating about the future of intelligent life on Earth.

Seminal and influential work

Maynard Smith has the distinction of being the first biologist to introduce mathematical models from game theory into the study of behavior. He was greatly influenced by John von Neumann and John Nash, and in turn introduced the Nash Equilibrium to biology.

In his book Evolution and the Theory of Games, he showed that the success of an organism's actions often depends on what other organisms do. In a field dominated by evolutionary biologists who tend to look exclusively for competitive relationships in Darwinian processes, his ideas were a breath of fresh air, inspiring such biologists and thinkers as Richard Dawkins and Robert Wright and offering methodologies that are still making their way into research labs around the world.
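The canonical example is Maynard Smith's own Hawk-Dove game. Here is a minimal sketch in Python (the payoff values are illustrative, chosen only to satisfy his condition that the cost of injury C exceed the resource value V): it iterates replicator dynamics until the population settles at the evolutionarily stable strategy, a Hawk frequency of V/C.

    # Hawk-Dove game: frequency-dependent selection finds the ESS.
    V = 4.0         # value of the contested resource (illustrative)
    C = 10.0        # cost of serious injury, C > V (illustrative)
    BASELINE = 5.0  # background fitness; keeps all fitnesses positive

    # Expected payoff to the row strategy against the column strategy.
    payoff = {
        "hawk": {"hawk": (V - C) / 2, "dove": V},
        "dove": {"hawk": 0.0,         "dove": V / 2},
    }

    def fitness(strategy, p_hawk):
        """Expected fitness in a population playing Hawk with frequency p_hawk."""
        return BASELINE + (p_hawk * payoff[strategy]["hawk"]
                           + (1 - p_hawk) * payoff[strategy]["dove"])

    p = 0.01  # initial frequency of Hawks
    for _ in range(2000):
        w_hawk = fitness("hawk", p)
        w_mean = p * w_hawk + (1 - p) * fitness("dove", p)
        p = p * w_hawk / w_mean  # discrete replicator update

    print(f"equilibrium Hawk frequency: {p:.3f}; analytic ESS V/C = {V/C:.3f}")

Neither pure strategy is stable on its own: a population of Doves invites Hawk mutants, while a population of Hawks pays the full cost of escalation. The mixture at V/C is the point where no rare mutant can do better - exactly the frequency-dependence Maynard Smith imported from game theory.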

Maynard Smith was also concerned with the predominance of sexual reproduction. According to his models, asexual reproduction should be more advantageous from the standpoint of natural selection.

In his 1978 book The Evolution of Sex, Maynard Smith pointed out "the twofold cost of sex." Sexually reproducing organisms, he argued, must produce both female and male offspring, whereas asexual organisms need only produce females. In most sexual populations half of the offspring are male, so generation for generation an asexual population produces twice as many daughters.

This, claimed Maynard Smith, should give asexual reproduction a huge evolutionary advantage. The puzzle, then, is why we see so much sex in the world. We still don't have a satisfactory answer.
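To put numbers on the cost (a sketch under the usual textbook simplifications: equal fecundity for sexual and asexual females, a 50:50 sex ratio, and population growth limited by daughters alone):

    # Twofold cost of sex: track female numbers, all else being equal.
    K = 2.0  # offspring per female per generation (illustrative)
    sexual_females = asexual_females = 100.0
    for generation in range(1, 6):
        sexual_females *= K / 2   # half of each sexual brood is male
        asexual_females *= K      # every asexual offspring is a daughter
        print(generation, sexual_females, asexual_females)

With K = 2 the sexual lineage merely replaces itself while the asexual lineage doubles every generation; in general the asexual lineage gains a factor of two per generation whatever K is - hence "twofold."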

Maynard Smith was also deeply committed to making evolutionary ideas accessible to a wide audience. His book, The Theory of Evolution, inspired many of today's leading researchers to become biologists.

Forward thinker

Maynard Smith was greatly influenced by another important scientist, JBS Haldane, the controversial transhumanist biologist and philosopher.

While a student at Eton College, Maynard Smith became alienated by what he felt was an anti-intellectual, snobbish and arrogant atmosphere. His professors hated Haldane, and frequently complained about his socialist, Marxist and atheist leanings.

Maynard Smith remembered thinking, "Anybody they hate so much can't be all bad. I must go and find out about him." He read Haldane's Possible Worlds and in turn sought him out. Haldane went on to become his primary mentor; Maynard Smith claimed afterwards that Haldane taught him everything he knew. "I wept when he died," said Maynard Smith.

Like Haldane, Maynard Smith had a progressive leftist political worldview and looked to technology and the medical sciences as a means for improving the human condition.

He was for a time a member of the Communist Party but left in disgust in 1956.

He died of lung cancer sitting in a high-backed chair, surrounded by books at his home in Lewes, East Sussex on April 19, 2004, 122 years to the day after the death of Darwin.

April 16, 2009

Using holographic tools to build worlds [video]


"World Builder" is an award willing short created by Bruce Banit, filmmaker and co-creator of "405". The video features a man who uses advanced holographic tools to build a world for the woman he loves. The short was filmed in a single day, but the post-production took 2 years.

Performance artist Stelarc profiled in the NYT

The New York Times has published an article about one of my favorite people on the planet, the Australian performance artist Stelarc. His art focuses on futurism and extending the capabilities of the human body; most of his pieces are built around his contention that the human body is obsolete. From the article:

The body, however, has its ways of fighting back against the 5-year, or 500-year plans of its owners. Geneticists report that our own genes are still evolving, to what end, no one can guess. No supercomputer can yet predict from simply reading a sequence of A’s, C’s, T’s and G’s that make up a genetic code what creature will emerge.

The progression to postnatural history may be a painful birth if the experience of Stelarc, 62, who splits his time between Brunel University in West London and the University of Western Sydney in Australia, is any example. The body, he says, is obsolete and needs to map its “post-evolutionary strategies.”

To that end, Stelarc has outfitted himself at times with an extra hand (nonsurgically), swallowed a camera that would explore the sculpture of his stomach and hung himself in the air on hooks. For a show called “Fractal Flesh,” he wired half his body, in Luxembourg, up to muscle stimulation equipment that could be controlled by computers in Paris, Helsinki and Amsterdam. The result, he told an interviewer later, “was a split body experience.”

The ear on his arm, he said, is a work in progress that has required a couple of surgeries so far. It took him 12 years to find the doctors and the financing, which was provided by the Discovery Channel as part of a series in experimental surgery, to do the work.
Read the entire NYT article.

As an aside, I had the great pleasure of meeting Stelarc when he keynoted at TransVision 2004. Unlike the anti-social Steve Mann (who also gave a keynote presentation), he stayed and mingled with the conference attendees for the entire weekend -- he's a very warm and approachable guy. He was even in attendance for my talk on working the conscious canvas, which was a great honour (yes, I'm a fanboy).

Do you hate this ad?

PETA ran this ad campaign several years ago, and needless to say it was met with an extremely negative reaction. You can read more about PETA and the use of Holocaust imagery over at Wikipedia.

For the record, I made a similar analogy a number of years ago.

So, what do you think? Did PETA go too far, or is it fair to compare?

Peter Singer: To defame religion is a human right

The UN Human Rights Council recently adopted a resolution condemning "defamation of religion" as a human rights violation. According to the text of the resolution, defamation of religion is "a serious affront to human dignity" that leads to "a restriction on the freedom of [religions'] adherents."

As a supporter of the UN, I found this both shocking and disappointing. It's a step in the wrong direction for anyone working to protect the world's cultural and intellectual health, and a punch in the face for freedom of speech advocates.

But leave it to Princeton's Peter Singer to tell it like it really is. In a recent article for the Guardian, Singer argues that defaming religion is hardly a human rights violation. On the contrary, says Singer -- it's actually a human right. We must defend the right to cause offense to believers, he argues, so long as the offense is not meant to stir up hatred.

Sounds reasonable to me.

More.

April 14, 2009

Welcome to the Machine, Part 4: Kurzweil's nano neural nets

Previously in series: The Ethics of Simulated Beings, Descartes's Malicious Demon and The Simulation Argument.

As previously noted in this series, our entire world may be simulated. For all we know, we're running on a powerful supercomputer somewhere, the mere playthings of posthuman intelligences.

But this is not the only possibility. There's another way that this kind of fully immersive 'reality' could be realized -- one that doesn't require the simulation of an entire world. Indeed, it's quite possible that your life is not what it seems -- that what you think of as reality is actually an illusion of the senses. You could be experiencing a completely immersive and totally convincing virtual reality right now without even knowing it.

How could such a thing be possible? Nanotechnology, of course.

The nano neural net

In his book The Singularity is Near, futurist Ray Kurzweil describes how a nanotechnology-powered neural network could give rise to the ultimate virtual reality experience. By suffusing the brain with specialized nanobots, he speculates, we will someday be able to override reality and replace it with an experience that's completely fabricated. And all without the use of a single brain jack.

Here's how:

First, we have to remember that all the sensory data we experience is converted into electrical signals that the brain can process. The brain does a very good job of this, and we in turn experience these inputs subjectively, as conscious awareness and qualia; our perception of reality is therefore nothing more than the brain's interpretation of incoming sensory information.

Now imagine that you could stop this sensory data at the conversion point and replace it with something else.

That's where the nano neural net comes in. According to Kurzweil, nanobots would park themselves near every interneuronal connection coming in from our senses (sight, hearing, touch, balance, etc.). They would then work to 1) halt the incoming sensory signals (not difficult -- we already know how to use "neuron transistors" that can detect and suppress neuronal firing) and 2) replace these inputs with the signals required to support a believable virtual reality environment (a bit more challenging).

As Kurzweil notes, "The brain does not experience the body directly." As far as the conscious self is concerned, the injected data would completely override the sensations generated by the real environment; the brain would experience the synthetic signals just as it would the real ones.
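
To make the two-step scheme concrete, here is a deliberately toy Python sketch of the intercept-and-replace loop described above. Every class, method and signal representation is hypothetical; nothing like this API exists.

```python
import random

class VREngine:
    """Stands in for the AI that fabricates believable sensory inputs."""
    def render(self, modality):
        # Return a synthetic firing pattern for the given sense (toy version).
        return [random.random() for _ in range(4)]

class AfferentConnection:
    """One incoming sensory fiber, e.g. from the retina."""
    def __init__(self, modality):
        self.modality = modality
        self.delivered = None
    def deliver(self, signal):
        self.delivered = signal  # whatever the brain ultimately receives

class SensoryNanobot:
    """Parked at one interneuronal connection, per Kurzweil's description."""
    def __init__(self, connection, engine):
        self.connection, self.engine = connection, engine
    def tick(self, real_signal):
        # Step 1: detect and suppress the real firing (the "neuron transistor" step)
        _ = real_signal  # read, then drop
        # Step 2: inject a replacement consistent with the virtual environment
        self.connection.deliver(self.engine.render(self.connection.modality))

vision = AfferentConnection("sight")
bot = SensoryNanobot(vision, VREngine())
bot.tick(real_signal=[0.1, 0.9, 0.3, 0.2])
print(vision.delivered)  # the brain receives only the fabricated signal
```

The hard part, as the next section explains, hides inside render(): fabricating signals the brain will accept as real.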

Generating synthetic experiences

Clearly, the second step -- generating new sensory signals -- is radically more complicated than the first (not to mention, of course, the difficulty of creating nanobots that can actually work within the brain itself!). Creating and transmitting credible artificial sensory data will be no easy feat. We will need to completely reverse engineer the brain so that we can map all requisite sensory interactions. We'll also need a fairly sophisticated AI to generate the stream of sensory data that's needed to create a succession of believable life experiences.

But assuming we can get a nano neural net to work, the sky's the limit in terms of how we could use it. Kurzweil notes,
You could decide to cause your muscles and limbs to move as you normally would, but the nanobots would intercept these interneuronal signals, suppress your real limbs from moving, and instead cause your virtual limbs to move, appropriately adjusting your vestibular system and providing the appropriate movement and reorientation in the virtual environment.
From there we will create virtual reality experiences as real or surreal as our imaginations allow. We'll be able to choose different bodies, reside in all sorts of environments and interact with our fellow neural netters. It'll be an entirely new realm of existence. This new world, with all its richness and possibility, may eventually supplant our very own.

And in some cases, we may even wish to suppress and alter our memories such that we won't know who we really are and that we're actually living in a VR environment...

A topic I will explore in more detail in my next post.

Humanist Canada debate: The Evolution of Ethics

I will be debating social conservative Michael Coren this coming Saturday April 18. Hope to see you there. Tickets are still available. Event description:
Are ethics divinely-inspired or man-made? Are there absolute morals? Although scientists and philosophers have debated the nature of ethics for hundreds of years, developments in genetic research have unleashed a firestorm of issues concerning human control of creation and its impact on our future. Given that society's ideals change over time, how can we determine what is morally right or wrong?

Join Humanist Canada for a lively and thought-provoking debate on the nature of ethics. Five prominent speakers, from both the Christian and Humanist communities, will discuss and debate some of the hottest topics today including abortion, gender, homosexuality, and biotechnology. Our panel of speakers includes: Christopher diCarlo (celebrated professor of Philosophy of Science and Bioethics; founder of "We Are All African" Campaign); Michael Coren (outspoken Christian writer; radio and TV host); George Dvorsky (popular transhumanist; animal rights activist); Tony Costa (recognized public speaker for Campus for Christ); and Jean Saindon (award-winning professor of Natural Science and Technology). Speaker profiles below.

Tickets: $15 Humanist Canada members; $25 general admission; $10 students (with school ID). Appetizers, desserts and drinks included. Purchase tickets by April 15 for best seats. Click HERE for the printable registration form or click on ticket choices below.

April 10, 2009

Welcome to the Machine, Part 3: The Simulation Argument

Previously in series: The Ethics of Simulated Beings and Descartes's Malicious Demon.

No longer relegated to the domain of science fiction or the ravings of street corner lunatics, the "simulation argument" has increasingly become a serious theory amongst academics, one that has been best articulated by philosopher Nick Bostrom.

In his seminal paper "Are You Living in a Computer Simulation?" Bostrom applies the assumption of substrate-independence, the idea that mental states can reside on multiple types of physical substrates, including the digital realm. He speculates that a computer running a suitable program could in fact be conscious. He also argues that future civilizations will very likely be able to pull off this trick and that many of the technologies required to do so have already been shown to be compatible with known physical laws and engineering constraints.

Harnessing computational power

Similar to futurists Ray Kurzweil and Vernor Vinge, Bostrom believes that enormous amounts of computing power will be available in the future. Moore's Law, which describes an eerily regular exponential increase in processing power, is showing no signs of waning, nor is it obvious that it ever will.

To build these kinds of simulations, a posthuman civilization would have to embark upon computational megaprojects. As Bostrom notes, determining an upper bound for computational power is difficult, but a number of thinkers have given it a shot. Eric Drexler has outlined a design for a system the size of a sugar cube that would perform 10^21 instructions per second. Robert Bradbury gives a rough estimate of 10^42 operations per second for a computer with a mass on the order of a large planet. Seth Lloyd calculates an upper bound for a 1 kg computer of 5*10^50 logical operations per second carried out on ~10^31 bits – this would likely be done on a quantum computer or on computers built out of nuclear matter or plasma [check out this article and this article for more information].
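
To see what these numbers actually buy, it helps to run the arithmetic against the cost of an ancestor simulation. The figures below are rough assumptions in the spirit of Bostrom's paper, which puts a human brain at roughly 10^16-10^17 operations per second; they are not exact values from it.

```python
BRAIN_OPS_PER_SEC = 1e17                  # upper end of Bostrom's brain estimate
HUMANS_EVER = 1e11                        # ~100 billion humans have ever lived
SECONDS_PER_LIFE = 50 * 365 * 24 * 3600   # assume a ~50-year average lifespan

# Operations needed to simulate every human mental life in history
total_ops = BRAIN_OPS_PER_SEC * HUMANS_EVER * SECONDS_PER_LIFE
print(f"All of human mental history: ~{total_ops:.0e} ops")      # ~2e+37

# Seth Lloyd's upper bound for a 1 kg computer: 5*10^50 ops per second
print(f"Runtime on Lloyd's machine: ~{total_ops / 5e50:.0e} s")  # ~3e-14
```

In other words, a single kilogram of Lloyd-style computronium could in principle replay the whole of human mental history in a tiny sliver of a second, with capacity to spare for running many such histories at once.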

More radically, John Barrow has demonstrated that, under a very strict set of cosmological conditions, indefinite information processing (pdf) can exist in an ever-expanding universe.

At any rate, this extreme level of computational power defies human comprehension. It's like imagining a universe within a universe -- and that's precisely how it may be used.

Worlds within worlds

"Let us suppose for a moment that these predictions are correct," writes Bostrom. "One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears." And because their computers would be so powerful, notes Bostrom, they could run many such simulations.

This observation, that there could be many simulations, led Bostrom to a fascinating conclusion. It's conceivable, he argues, that the vast majority of minds like ours do not belong to the original species but rather to people simulated by the advanced descendants of the original species. If this were the case, "we would be rational to think that we are likely among the simulated minds rather than among the original biological ones."

Moreover, there is also the possibility that simulated civilizations may become posthuman themselves. Bostrom writes,
They may then run their own ancestor-simulations on powerful computers they build in their simulated universe. Such computers would be “virtual machines”, a familiar concept in computer science. (Java script web-applets, for instance, run on a virtual machine – a simulated computer – inside your desktop.) Virtual machines can be stacked: it’s possible to simulate a machine simulating another machine, and so on, in arbitrarily many steps of iteration...we would have to suspect that the posthumans running our simulation are themselves simulated beings; and their creators, in turn, may also be simulated beings.
Given this matrioshkan possibility, "real" minds should be vastly outnumbered by simulated minds across all existence. The suggestion that we're not living in a simulation must therefore address the apparent gross improbability in question.
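
The force of this counting argument is easiest to see with toy numbers. Suppose each civilization that reaches posthumanity runs some number of ancestor simulations, each containing as many minds as the original history; both figures below are hypothetical stand-ins, not values from Bostrom's paper.

```python
real_histories = 1             # the one "basement-level" history
sims_per_civilization = 1000   # hypothetical; Bostrom says only "many"

simulated_minds = real_histories * sims_per_civilization
total_minds = real_histories + simulated_minds

print(f"Odds a given mind is unsimulated: {real_histories / total_minds:.2%}")
# -> 0.10%: even modest assumptions leave simulated minds vastly dominant
```

And if simulations nest, each layer multiplies the count again, driving the odds of being at the basement level lower still.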

Again, all this presupposes, of course, that civilizations are capable of surviving to the point where it's possible to run simulations of forebears and that our descendants desire to do so. But as noted above, there doesn't seem to be any reason to preclude such a technological feat.

Next: Kurzweil's nano neural nets.

April 8, 2009

Seriously, how hard is it to launch a rocket?

All that fuss last week about North Korea launching a rocket, and the thing ends up in the drink. It's almost laughable. We're talking about decades-old technology here -- how hard could it possibly be?

Well, the New York Times recently asked the experts -- and it's not, uh, rocket science. Er, wait -- actually it is...

According to the experts 1) it's harder than it looks and 2) despite the setback, we shouldn't underestimate the North Korean threat:

The Hard Part: Hitting a Target by Rand Simberg, a recovering aerospace engineer who blogs about space, politics and the future at Transterrestrial Musings. Simberg writes, "It is one thing to be able to send the upper stage of a missile a few thousand miles. It is another to guide it to hit a target."

‘Primitive,’ but Dangerous, Skills by John Pike, the director of GlobalSecurity.org, a military information Web site. Pike writes:
There is a tendency to disparage the North Koreans (as well as Pakistanis, Iranians and Indians) as ignorant peons whose weapons skills are consistently derided as “primitive.” While this latest test will fuel the “ignorant peon” school, it should not.

North Korea’s low yield nuclear test in October 2006 was derided as a failure, because it did not replicate the multi-kiloton yield of America’s first nuclear test. It did, however, coincide with the sub-kiloton tests of the fission trigger for a hydrogen bomb. The “ignorant peon” school tells us that North Korea’s “primitive” atomic bombs are too big to put on missiles. But possibly North Korea’s hydrogen bombs are easily fitted on missiles.

Be sure to read their entire responses.

Here's a video simulation of what the North Korean satellite launch was supposed to look like, along with possible paths and tracking stations:

Military robots will soon be able to fire on their own

The U.S. Defense Department is reportedly looking to develop autonomous armed robots that will eventually be able to find and destroy targets on their own. Instead of being controlled remotely, unmanned drones will have on-board computer programs that can decide whether they should fire their weapons.

"The trend is clear: Warfare will continue and autonomous robots will ultimately be deployed in its conduct," writes Ronald Arkin, a robotics expert at the Georgia Institute of Technology in Atlanta. "The pressure of an increasing battlefield tempo is forcing autonomy further and further toward the point of robots making that final, lethal decision," he predicted. "The time available to make the decision to shoot or not to shoot is becoming too short for remote humans to make intelligent informed decisions."

According to John Pike, an expert on defense and intelligence matters, autonomous armed robotic systems probably will be operating by 2020.

But many fear that these robots will be unable to distinguish between legitimate targets and civilians in a war zone.

"We are sleepwalking into a brave new world where robots decide who, where and when to kill," said Noel Sharkey, an expert on robotics and artificial intelligence at the University of Sheffield, England.

More.