October 29, 2009

The Bright Side of Nuclear Armament

Casey Rae-Hunter is a guest blogger for Sentient Developments.

Today's e-edition of the always thought-provoking Foreign Policy magazine had the usual roundup of articles on America's dicey diplomacy with Iran and the Afghanistan Question, which at this point can be summarized by the famous Clash song. The news roundup also featured a couple of articles on nuclear proliferation, including a contrarian piece by John Mueller called "The Rise of Nuclear Alarmism: How We Learned to Start Worrying and Fear the Bomb — and Why We Don’t Have To." How could I resist a provocative title like that?

The article posits that history would've dumped us in more or less the same place with or without the Bomb. That statement alone is sure to ruffle some feathers. But Mueller's assertion that nuclear weapons didn't even serve as a deterrent is in direct defiance of conventional military-historical wisdom.
Nuclear weapons are, of course, routinely given credit for preventing or deterring a major war, especially during the Cold War. However, it is increasingly clear that the Soviet Union never had the slightest interest in engaging in any kind of conflict that would remotely resemble World War II, whether nuclear or not. Its agenda mainly stressed revolution, class rebellion, and civil war, conflict areas in which nuclear weapons are irrelevant.

Nor have possessors of the weapons ever really been able to find much military use for them in actual armed conflicts. They were of no help to the United States in Korea, Vietnam, or Iraq; to the Soviet Union in Afghanistan; to France in Algeria; to Britain in the Falklands; to Israel in Lebanon and Gaza; or to China in dealing with its once-impudent neighbor Vietnam.

In fact, a major reason so few technologically capable countries have actually sought to build the weapons, contrary to decades of hand-wringing prognostication, is that most have found them, on examination, to be a substantial and even ridiculous misdirection of funds, effort, and scientific talent.

It's certainly difficult to disagree with his last point, particularly when backed up by the sobering fact that, "during the Cold War alone, it has been calculated, the United States spent enough money on these useless weapons and their increasingly fancy delivery systems to have purchased somewhere between 55 and 100 percent of everything in the country except the land." Basically, that money could've bought cradle-to-grave health care for my uninsured countrymen several times over. Makes me proud to be an American.

Mueller also suggests that status quo thinking on nuclear armament has led to appalling strategic blunders, including the Iraq War:
For more than a decade, U.S. policy obsessed over the possibility that Saddam Hussein's pathetic and technologically dysfunctional regime in Iraq could in time obtain nuclear weapons (it took the more advanced Pakistan 28 years), which it might then suicidally lob, or threaten to lob, at somebody. To prevent this imagined and highly unlikely calamity, a war has been waged that has probably resulted in more deaths than were suffered at Hiroshima and Nagasaki combined.

And North Korea? Forget it. The payloads in its current weapons would, if detonated in the middle of New York's Central Park, "be unable to destroy buildings on its periphery," according to Mueller.

Though I lack the time and resources to independently verify his contentions, Mueller's central question is worth considering: What are we ultimately achieving with our nuclear fever-dream? Are we securing peace, or merely investing a long-dreaded boogeyman with an endless supply of neurosis? How big will we allow this beast to get? When will it fulfill those dark fantasies with which we keep it so well fed?

I'm not so sure that we should just kick back while Iran builds and tests a nuke. Nor should we stop using North Korea's weapons program as leverage in an international call for openness and reform. Nuclear blackmail is a two-way street. Where tyrants and theocrats seek to exploit their lust for the Bomb to create insecurity, America and its allies should be free to explore options that would limit the effect of such brinksmanship. Yet we would be well-advised to heed Mueller's message: by playing into nuclear paranoia, we add to the general shakiness of some already iffy geopolitical situations.

It may very well be the case that obtaining a nuclear weapon is, as Mueller claims, "substantially valueless," and a "very considerable waste of money and effort." Now who's gonna tell Ahmadinejad?

Casey Rae-Hunter is a writer, editor, musician, producer and self-proclaimed "lover of fine food and drink." He is the Communications Director of the Future of Music Coalition — a Washington, DC think tank that identifies, examines, interprets and translates issues at the intersection of music, law, technology and policy. He is also the founder and CEO of The Contrarian Media Group, which publishes The Contrarian and Autistic in the District — the latter a blog about Asperger's Syndrome.

October 26, 2009

Link dump: 2009.10.26

From the four corners of the web:
  • Practical Ethics: Will Down syndrome disappear?
    There are concerns about the impact of the improving accuracy and availability of cheap, low-risk prenatal tests, such as those for Down syndrome (DS). Introduction of a noninvasive maternal serum test is expected that might provide a definitive diagnosis of DS in the first trimester at no risk to the fetus. The authors report that the tests should be virtually universally available and allow privacy of decision making. The authors ask whether the new tests will decrease the birth incidence of DS even further. Indeed, might there be no more DS children born? If so, is that a problem?
  • Seven questions that keep physicists up at night [NS]
    It's not your average confession show: a panel of leading physicists spilling the beans about what keeps them tossing and turning in the wee hours. That was the scene a few days ago in front of a packed auditorium at the Perimeter Institute, in Waterloo, Canada, when a panel of physicists was asked to respond to a single question: "What keeps you awake at night?" The discussion was part of "Quantum to Cosmos", a 10-day physics extravaganza, which ends on Sunday.
  • Womb transplant 'years away'
    The reported two-year estimate for the first human womb transplant is overly optimistic. There are several major hurdles to overcome before this could be considered ready for trials in humans. It would also involve a series of operations, carrying all of the usual risks, plus ones that are as yet unknown, for a non life-threatening condition.
  • An Open Letter to Future Bioethicists
    I have thought about their question quite a bit. I have come to realize that the answer is not the same for everyone who presents the questions. But, the core of the answer is pretty much the same; pursue masters level training in bioethics, acquire familiarity with key social science methods and tools, learn something about a particular sub-area of the health sciences or life sciences and, seek out every opportunity to fine tune your analytical and rhetorical skills by working with others on projects, research, consulting, or teaching activities. At its heart bioethics is an interdisciplinary activity and knowing how to work with others who do empirical, historical, legal and normative work is a must.

October 24, 2009

Link dump: 2009.10.24

From the four corners of the web:

October 23, 2009

Remembering Mac Tonnies

Our relationship got off to a rather fiery start.

Back in 2006 I discovered that a prominent UFOlogist had been linking to a number of articles on my blog. Even more startling was the realization that the blog in question, Posthuman Blues, was an effort to bridge transhumanist discourse with that of the UFOlogists.

Eager to break the memetic linkage between the two seemingly disparate schools of thought, I penned the article, "Unidentified Flying Idiots." It was typical of my rants, a vitriolic diatribe directed against a group of know-nothing X-Files zealots who were giving legitimate scientific studies a bad name.

And my angst was directed at the head perpetrator himself: Mac Tonnies. In the article I wrote:
Tonnies’s legitimate content is offset by his misguided focus on UFOlogy. As a result, the transhumanist movement may have a harder time gaining public acceptance and support with this kind of negative association.

I'm sorry, folks, but you can't have your cake and eat it too. You can't pick and choose the science that appeals to you and then attempt to tie it in with bogus and unfounded speculations. It's like Fox Mulder in the X-Files, who has a poster on his wall that reads, "I want to believe." Well, I also want to believe in UFOs. I also want to believe in Jesus and the tooth fairy, but wanting to believe in those things ain't gonna make it so.
Needless to say, it did not impress the UFOlogists, who reacted by hammering me in the comments and through emails -- including Tonnies, who remarked, "A very poor showing, George. I'm as leery of the lunatic fringe as anyone -- probably more so than many people unfamiliar with the UFO inquiry. But the whackos -- and they are legion -- don't define the very real questions posed by UFOs and related phenomena."

You'd think that would be the end of it, but it actually marked the beginning of a three year relationship, one that ended tragically this week with Mac's untimely death.

Rather than dismiss each other outright, we maintained a civil correspondence over the years. Neither of us wavered from our positions, but we bonded over our shared passion for the answer to one simple question: What is the true nature of extraterrestrial life? Moreover, we felt that the answer to this profound question resided somewhere within futurist studies. By trying to look deep into humanity's future we both thought that we could unveil some clues about the makeup of advanced extraterrestrial life.

But where I considered such things as von Neumann probes and Dyson spheres, Tonnies looked to flying saucers and little green men. To be fair, though, Mac's viewpoint was more sophisticated than that. "I'm uncomfortable with the concept of 'belief' when dealing with unusual topics such as UFOs," he wrote. "Technically, no, I don't 'believe' in UFOs (in the sense that UFOs require any sort of faith). But it isn't necessary to 'believe' in UFOs in order to discuss them meaningfully; empirical evidence shows that UFOs (whatever they are) exist. There are plenty of professional 'skeptics' who will refute this. But there remains a core phenomenon that begs disciplined study."

There's no question that his views were 'out there.' Mac argued that aliens once resided on Mars and that they currently dwell among us.

But despite these fringe views, Mac's futurism was closely aligned with that of the transhumanists (another group often accused of being 'fringe'). Tonnies contended that,
Consciousness is a potential technology; we are exquisite machines, nothing less than sentient patterns. As such, there's no convincing technical reason we can't eventually upload ourselves into matrices of our design and choosing. It's likely the phenomenon we casually call "intelligence" will cease to be strictly biological as we begin to merge with our machines more meaningfully and intimately. (Philip K. Dick once wrote that "living and nonliving things are exchanging properties." I suspect that in a few hundred years, barring disaster, separating the animate from the inanimate will probably be an exercise in futility.) Ultimately, we have two options: self-mutate by venturing off-planet in minds and bodies of our own design, or succumb to extinction.
Tonnies thought he was on the right side of science and history. "I spend an inordinately large portion of my time pursuing unpopular ideas and esoteric theories with what I sincerely hope is balanced skepticism," he wrote. The perpetual outcast, Tonnies envisioned himself as being the true skeptic.

Mac and I never really did see eye-to-eye when it came to our theories on the true nature of extraterrestrial intelligence, but our differing viewpoints didn't get in the way of our mutual respect. We became good friends via social media, with Twitter acting as a great go-between. We frequently sent each other links and article ideas that didn't quite fit into our own communities, and we regularly riffed off each other's sites. Deep down inside I think we both hoped that, through open and mature discourse, our ideas would eventually sway the other. But it was not to be.

I will sorely miss Mac's posts and his unwavering commitment to the problem that is the Great Silence.

And I'm devastated to know that he will never take part in the future he so often dreamed of.

October 22, 2009

Pigliucci on science and the scope of skeptical inquiry

Russell Blackford is a guest blogger for Sentient Developments.

Over on his Rationally Speaking blog, Massimo Pigliucci has an interesting post on the nature and scope of skeptical inquiry. He is particularly keen to nail down the relationships between scientifically-based skeptical thought, political philosophy, and philosophy of religion (he actually says "atheism", but I think this is a mistake). Pigliucci is a biologist and a philosopher, and these are his three main areas of intellectual interest.

To illustrate his points, Pigliucci introduces a diagram that shows skepticism overlapping with both "atheism" and political philosophy, though they do not overlap with each other. On this diagram, all three fall into a larger realm of critical thinking and rational analysis. Although it's a neat diagram, I think that it (along with the analysis that it illustrates) is somewhat misleading, and in at least one respect even wrong.

On skeptical inquiry

First, however, let's consider something that Pigliucci clearly gets right. He says:

Skeptical inquiry, in the classic sense, pertains to the critical examination of evidential claims of the para- or super-normal. This means not just ghosts, telepathy, clairvoyance, UFOs and the like, but also — for instance — the creationist idea that the world is 6,000 years old. All these claims are, at least in principle, amenable to scientific inquiry because they refer to things that we can observe, measure and perhaps even repeat experimentally. Notice, of course, that (some) religious claims do therefore fall squarely within the domain of scientific skepticism. Also in this area we find pseudohistorical claims, such as Holocaust denial, and pseudoscientific ones like fear of vaccines and denial of global warming. Which means of course that some politically charged issues — like the latter two — can also pertain properly to skeptical inquiry.

I'm with him completely on this. Claims about ghosts, the age of the Earth, and pseudohistorical or pseudoscientific theories, are all within the ambit of skeptical inquiry as so defined. Skeptical inquiry in the sense under discussion is not about taking positions that are in the minority. It is about rational inquiry into various claims, popular or otherwise, using the means available not only to science but also to such fields as history. Accordingly, when someone claims to be a "climate change skeptic" she is using words in a different sense.

It is always possible, of course, that a view with widespread, or even consensus, support from current science is nonetheless incorrect. Still, skepticism in the sense that Pigliucci is discussing is not about challenging the majority position. It is about rational investigation, especially of extraordinary claims - extraordinary in the sense that they fit badly, or not at all, with the best picture of the world built up through science, scholarship, and ordinary observation.

In particular, we are not talking here about some kind of radical epistemological skepticism, such as Descartes wrestled with and sought (unsuccessfully) to transcend or escape. Nor are we talking about skepticism as regards the status quo of scientific and scholarly knowledge. That sort of skepticism is possible, of course, and it may sometimes be justified. However, it is not legitimate to act as a skeptic merely in this sense, while attempting to get approbation for being engaged in skeptical inquiry in the different, and quite familiar, sense that Pigliucci describes.

Accordingly, I think that Pigliucci is correct when he later denounces the practice of "using the venerable mantle of skepticism to engage in silly notions like denying global warming or the efficacy of vaccines." As he says, "That’s an insult to critical analysis, which is the one thing we all truly cherish."

Pigliucci is also quite correct to show an overlap between atheism and skeptical inquiry, although atheism is a substantive position, not a field of inquiry, so he should really have written "philosophy of religion". He is correct that what philosophers of religion do when they investigate religious claims, such as those about the existence of various gods, overlaps with scientific skepticism or skeptical inquiry. I think, however, that he unnecessarily deprecates the extent of this overlap. This I'll return to.

On political philosophy

As for political philosophy, Pigliucci sees this too as overlapping with skeptical inquiry. After all, he says, some skeptical inquiry (e.g. into the claims of Holocaust denialists) has implications for political philosophy.

This seems to be correct. However, he doesn't seem to have noticed that philosophy of religion may also have implications for political philosophy, and vice versa. For example, some religious positions, if correct, have definite implications for the role of the state. After all, various comprehensive worldviews based on religion claim that the state should enforce religious systems of morality or law; these worldviews are starkly opposed to liberalism and pluralism.

Less obviously, it is at least conceivable that a political position on an issue such as social justice could have implications for whether we should accept certain religious positions. We might develop a politically-based theory of justice, then ask, "Does the world seem to have been created by a just God?" Surely the answer could feed back into at least some views about the existence or nature of God. In any event, the diagram seems to be wrong, not just misleadingly presented, when it shows no overlap between political philosophy and "atheism".

Science and philosophy

While this may be the only error, strictly speaking, there are other problems with the analysis. They emerge when Pigliucci tries to defend the view that atheism is a philosophical position, rather than a scientific one. There is a sense in which this is clearly, but rather trivially, true. The issue of God's existence is, after all, examined by philosophers of religion, and not usually by biologists or physicists, and the pedagogical and other decisions that have led to this have not been merely arbitrary. But there's also a sense in which Pigliucci's account is misleading. Here's how he attempts to persuade us:

Now, I have argued of course that any intelligent philosopher ought to allow her ideas to be informed by science, but philosophical inquiry is broader than science because it includes non-evidence based approaches, such as logic or more broadly reason-based arguments. This is both the strength and the weakness of philosophy when compared to science: it is both broader and yet of course less prone to incremental discovery and precise answers. When someone, therefore, wants to make a scientific argument in favor of atheism — like Dawkins and Jerry Coyne seem to do — he is stepping outside of the epistemological boundaries of science, thereby doing a disservice both to science and to intellectual inquiry. Consider again the example of a creationist who maintains in the face of evidence that the universe really is 6,000 years old, and that it only looks older because god arranged things in a way to test our faith. There is absolutely no empirical evidence that could contradict that sort of statement, but a philosopher can easily point out why it is unreasonable, and that furthermore it creates very serious theological quandaries.

The difficulty here should be obvious. Scientists do use logic and "more broadly reason-based arguments"; they do so all the time. Much of science proceeds by processes that include logical deduction, and there are no a priori boundaries to the kinds of "broadly reason-based arguments" that scientists can use.

Let me qualify that: there may be some claims that should be conceded as lying outside of science. These may be more a matter of the historical construction of science as a set of institutions than anything else, but I'll not press that issue. Instead, let's agree, for the sake of argument, that scientific reasoning alone cannot give us correct values or correct moral norms. Let's also assume that such things as correct values and moral norms actually exist - though there's much to be said here - but that science alone cannot provide them.

It's also strongly arguable that science is unable to deliver correct statements about fundamental epistemological principles. Take, for example, a principle such as, "All truths except this one are truths that are known through science." A claim like that, whatever its other features, does not seem to be known through science. Nor does its negation seem to be known through science. I'll assume, then, that some meaningful and rational discussion of epistemological issues lies beyond the boundaries of science.

Still, this is not the sort of example that Pigliucci offers. Instead, he begins with the claim that the universe is really 6,000 years old. Science has, of course, produced plenty of evidence that this is just false, that the universe is more like 13 to 14 billion years old. Our own planet is roughly 4 to 5 billion years old. All of this surely counts against the claim that the universe is really only 6,000 years old. Pigliucci is quite correct to see this as an example of science falsifying a religious claim, and I suspect he'd think there are many such examples. Furthermore, he doesn't try to assert, in the fashion of Stephen Jay Gould, that there are some kinds of claims that it is illegitimate for religion to make. Thus, quite correctly in my view, he does not accept the principle of Non-Overlapping Magisteria.

However, what if somebody replies that God arranged for the Earth to look far older than it really is, in order to test our faith? Here, Pigliucci thinks that science (and hence skeptical inquiry) reaches a limit. He claims, in effect, that philosophers have a reply, whereas scientists must stand mute.

I disagree with this. The scientist is quite entitled to reject the claim, not because it makes falsified predictions or conflicts directly with observations (it doesn't) but because it is ad hoc. It is perfectly legitimate for scientists working in the relevant fields to make the judgment that a particular hypothesis is not worth pursuing, and should be treated as false, because it has been introduced merely to avoid falsification of a position that is contrary to the evidence.

Scientists might take some interest in claims about a pre-aged Earth if they were framed in such a way as to make novel and testable predictions, but as long as all such claims are presented as mere ad hoc manoeuvres to avoid falsification of the claim that the universe is really 6,000 years old, a scientist is quite entitled to reject it. A philosopher should reject it for exactly the same reason. Philosophers don't have any advantage over scientists at this point.

Thus, Pigliucci is unnecessarily limiting the kinds of arguments that are available to scientists. He writes as if they are incapable of using arguments grounded in commonsense reasoning, such as arguments that propose we reject ad hoc thesis-saving hypotheses.

That's not to say that the resources of science never run out. But when they do it is often for merely practical reasons. For example, it may be because a problem that confronts us requires that we consider points that scientists are, in practice, not well-trained to consider. If that's the problem, it's a matter of pragmatic division of labour, not of an epistemological resource that's out of bounds to scientists in principle.

Accordingly, we might have good reason to say that scientists, as a class, are not that well-trained to solve puzzles that arise within philosophy of religion. But it doesn't follow that any specific scientist - Richard Dawkins, say - is poorly equipped to do so by his training and study. Nor does it follow that whatever arguments Dawkins uses are "not scientific". They may be shared with philosophers, but it by no means follows that they are out of bounds for use by scientists. They may not be distinctively scientific, but that's another matter.

Moreover, it is possible that certain arguments that are legitimately open to scientists to develop might turn out to be decisive, one way or another, with respect to issues in philosophy of religion. Pigliucci says: "When someone, therefore, wants to make a scientific argument in favor of atheism — like Dawkins and Jerry Coyne seem to do — he is stepping outside of the epistemological boundaries of science, thereby doing a disservice both to science and to intellectual inquiry." But we can't know that in advance. It's certainly not a truism which we're compelled to accept.

Two examples

It might help to consider some contrasting examples. First, suppose that a cryptozoologist claims that a gigantic, previously undiscovered species of ape lives in the forests of New England (I'm thinking of the location in North America, not the identically-named location in Australia, or any other place with the same name). I assume that it would be pretty straightforward to work out what would be good evidence for or against the existence of this new species - what kinds of observations we would need to make to confirm its existence directly, what kinds of observations would pretty much preclude its existence, and what observations would be inconclusive. It wouldn't be too hard, at least in principle, to get together a group of zoologists, ecologists, and the like, to investigate the matter. Thus, no one doubts that the existence or otherwise of this spectacular New World primate is a scientific question.

What, however, if the claim is made that a Jewish apocalyptic prophet performed miracles during the first century of the Common Era? This looks like a job for historians - thus we immediately assign it to folks in the Faculty of Arts, rather than the Faculty of Science. The historians are likely to ask for historical evidence of the existence of this prophet and of his alleged miracles. Surely that's reasonable? This may involve (among other things) investigating various documents that supposedly record the prophet's acts, including the miraculous ones. How should the historians proceed?

Well, it will be a bit complicated, though perhaps no more so than the job of the scientists looking for the giant ape.

The historians might wish to establish, using a variety of means available to them, whether the documents were contemporary with the events described. They might examine the documents to try to determine whether they were originally created in their current form, or whether some parts are older, and perhaps more reliable than others. They might attempt to determine whether any of the events recorded in the documents are of such a nature that, if they really happened, they would have been recorded in secular texts of the time. For example, the documents might claim that on such and such a day five hundred long-dead corpses rose from a major cemetery and wandered the streets of Rome, accosting sinners and soldiers. Historians can check whether any of the secular historical texts and other unbiased records describe such an event.

They might also check carefully to see whether the documents are internally consistent and consistent with each other, and the nature of the inconsistencies if any are found. They might take into account whatever is known about the propensity for the lives of prophets to be mythologised, in the sense that the truth is embroidered with (false) accounts of miracle working. They might look to forensic psychologists, among others, for knowledge of when and how people come to believe things (and even to believe they saw things) that turn out to be false.

Many of the skills needed to do all this (including language skills) are taught in arts faculties rather than science faculties. And yet, there is nothing in the kinds of investigations that the historians will be involved in, or the kinds of arguments that they will use in attempting to settle the issue, that is conceptually remote from scientific reasoning. The same sort of logic will be employed; ad hoc hypotheses will be rejected; facts will be weighed.

It's true, of course, that the job will be assigned to people who are well trained in interpreting the nuances of language and the effects of culture, rather than in (for example) mathematics and the conduct of experiments. On the other hand, some scientific apparatus might be used, such as computers programmed to analyse texts to help determine whether they were written by the same person. Hypothetico-deductive reasoning might be relied on at various points. Most importantly, none of the techniques that I am describing are totally unavailable to scientists - it's more a question of emphasis in training. It makes sense to call the investigation a "scientific" one, even though conducted by people employed within arts faculties.

Or we might say that it's an issue for historians, not scientists, while adding that there is no radical difference between the epistemological resources of history and science. It's just that different emphases in training and skill mixes tend to be needed, for everyday purposes, by scientists and historians. If someone had all these skills, they would complement each other and mesh together just fine. When we talk about the methods of scientists and compare those of historians, there are no radically different "ways of knowing" involved. Moreover, there is no reason in a case like this why the historical evidence and arguments should be considered anything less than decisive.

What about philosophers?

Imagine that a philosopher seeks to investigate whether a divine being created the Earth. In that case, she might be faced with evidence of many kinds. For example, one item of alleged evidence, among the many, might be the claim that a Jewish apocalyptic prophet who performed miracles in the first century of the Common Era claimed to be the son of this being. The philosopher might conclude that the alleged testimony of the apocalyptic prophet would carry weight if: (1) he really existed and said what is recorded, and; (2) he really did perform the alleged miracles.

In checking into this evidential issue, the philosopher is likely to ask for help from historians, at least in the first instance, rather than from scientists, thus keeping the investigation within the Faculty of Arts. But, let's remember, the historians will not be using techniques or arguments that are radically foreign to science and scientists - they have a different skill mix but not a radically different way of knowing.

What if the apocalyptic prophet were alleged to have made various claims that are in conflict with current science, e.g. that the Sun is a ball of white hot metal circling the Earth? A philosopher might take this as evidence (perhaps not strong evidence, but still ...) that the prophet was all too human and not, in fact, the child of a divine being. Note, however, that she would depend on scientists to tell her that the claim is, in fact, incorrect, and on historians to tell her whether it is likely that the prophet really said what is attributed to him. At no stage in this inquiry - at least no stage discussed so far - does the philosopher do anything that's radically foreign to scientific reasoning.

In the upshot, the question about a divine being who created the Earth is likely to involve input from many disciplines, with people who have many different skill sets providing relevant data and sub-conclusions. The beleaguered philosopher must sort out highly complex arguments using all this material, while (probably) not having the skills herself to undertake the textual analysis performed by the historian, or the physical experiments that were performed by scientists in the past when they discovered the true nature of the Sun. But she has not yet used arguments or evidence that are beyond those available, in principle, to scientists. It's simply a matter of division of labour within academic institutions and the availability of people with different skill sets.

Accordingly, a question about the existence of a divine creator is different, in a practical way, from a question about an unknown species of gigantic ape in New England. Whereas the latter can be assigned to scientists from a small group of relevantly related disciplines, the former may call on data and conclusions from many disciplines, across faculty boundaries, and involving many different skill sets. The overall argument may be extremely complicated in the sense that there are many sub-arguments (from many disciplines) feeding into it. This kind of argument therefore gets assigned to philosophy, the repository for arguments that involve many considerations (and sub-conclusions) from many fields.

But, while that is a reason to say that a question such as this is philosophical, it still does not follow that any of the reasoning done is unavailable to scientists who are broadly enough trained. It is simply that some of the skills depended on at different points in the overall argument come from people with training that scientists don't usually have - e.g. advanced knowledge of ancient languages.

Not only that, but some of the sub-conclusions derivable from science might turn out to be decisive. If we're told enough about the God concerned, we might be able to deduce that it doesn't exist (or that it does) purely on the basis of data and arguments that are available to scientists, without even calling in the historians to help establish what took place in the Middle East 2000 years ago. Thus, Pigliucci is wrong when he suggests that atheism cannot, as a matter of principle, be established by scientific arguments. Whether or not it can be, in respect of one god or another, remains to be seen.

For example, consider the claim that an all-benevolent, all-powerful, all-knowing God exists, and has existed from eternity. It is well within the skills of scientists to give this consideration, deduce what kinds of events would contradict the claim, and look for evidence of such events - e.g. evidence of nature red in tooth and claw, the existence of horrible pain experienced by sentient creatures, and that much of this has nothing to do with any exercise of free will by human beings.
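The reasoning described here has the familiar modus tollens shape of the logical argument from evil, and it can be made fully explicit. The following is a standard formalization of that shape, written in Lean (the proposition names are mine, chosen for illustration, not the author's):

```lean
-- Modus tollens form of the argument from evil: from
--   p1 : if such a God exists, there is no gratuitous suffering, and
--   p2 : gratuitous suffering exists,
-- we conclude that no such God exists.
theorem argument_from_evil
    (God GratuitousSuffering : Prop)
    (p1 : God → ¬ GratuitousSuffering)
    (p2 : GratuitousSuffering) : ¬ God :=
  fun hGod => p1 hGod p2
```

The logic is uncontroversial; the scientific work Blackford describes goes entirely into establishing the second premise, which is why an interdisciplinary empirical inquiry is relevant at all.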

Although there is no science that is specifically charged with investigating the existence of such a deity, there easily could be an interdisciplinary effort by various scientists, particularly including biologists, that justifiably concludes that a god of this kind does not exist. If a theist who supports the existence of a god of this kind resorts to ad hoc manoeuvres, the scientists will be well equipped to recognise them as such.

None of this is to deny that some of what goes on in philosophy is different from what goes on in any scientific discipline. But it is not known in advance that any of these things will be required to settle, decisively, the truth of a particular religious claim.

Perhaps, however, there could be religious claims that it is possible to settle only if we first settle issues of morality or fundamental epistemology that lie outside of science. Accordingly, there is a possibility that some claims about the existence of a god will require sub-conclusions that seem to lie beyond the scope of science. Thus, we can't guarantee that all questions about the existence of a God or gods are decisively resolvable by science, or by methods (such as those of historical-textual scholars) that are allied with it.


We should come to a weaker conclusion than Pigliucci's. Pace Pigliucci, it is not wrong in principle to put forward scientific arguments for atheism (or for theism). It cannot be ruled out in advance that the kinds of arguments used by scientists will be decisive.

Even if the scientific arguments are not decisive by themselves, they may be decisive when taken in conjunction with other considerations. They may, in other words, still be of crucial importance in reaching an atheistic (or, indeed, theistic) conclusion, in which case it appears unfair to criticise somebody like Richard Dawkins for overstepping the bounds.

After all, philosophers are forced to draw upon resources from other disciplines. Why can't a biologist do likewise, obtaining important data and sub-conclusions from his own field, while also relying on input from (say) historians and philosophers for the full argument? If we accept that picture, scientists in the relevant field(s) do have an advantage over people with no scientific training. The advantage will consist in the possession of both a useful knowledge base and skill in developing the relevant kinds of arguments. While the ultimate conclusion may turn out to require assistance from, say, historians or philosophers, that does not render scientific qualifications irrelevant.

In any event, Pigliucci is surely correct about one thing: the questions relating to theism, atheism, and philosophy of religion in general, should be investigated rationally. Philosophers, historians, and various kinds of scientists may all have a role to play in that investigation (though it is still possible that one or other set of arguments by itself will be decisive). There is no "way of knowing", lying somewhere beyond the realm of rational inquiry, that can solve the problem for us. We are left with our reason and intelligence, and the ongoing advance of knowledge.

But possessing those is no small thing. It's something we must always celebrate, the only key to a (post)human future on or beyond our blue-green Earth.

Russell Blackford's home blog is Metamagician and the Hellfire Club. He is editor-in-chief of The Journal of Evolution and Technology and co-editor, with Udo Schuklenk, of 50 Voices of Disbelief: Why We Are Atheists (Wiley-Blackwell, 2009).

October 19, 2009

Oklahoma and abortion - some fittingly harsh reflections

Russell Blackford is a guest blogger for Sentient Developments.

The Oklahoma legislature has passed a draconian statute that provides for personal details about women who obtain abortions to be placed online at a public website. To be fair, the information does not include actual names; nonetheless, it is in such detail that many women could be identified and harassed. What's more, even if no harassment eventuates, a woman's privacy is violated cruelly if she is required to provide such details as these:

answers to 34 questions including their age, marital status and education levels, as well as the number of previous pregnancies and abortions. Women are required to reveal their relationship with the father, the reason for the abortion and the area where the abortion was performed.

Much can be said about this despicable and cruel law, all of it in tones of fitting outrage. Over at Metamagician and the Hellfire Club, I concentrated on an important technical aspect. This is that outright criminalisation of a practice is not the only means that the state can use in its efforts to suppress the practice. There are many ways that political power can be used to attack our liberties.

In current social circumstances prevailing in Western societies, the criminal law uses punishments that can include the infliction of a range of harms, such as loss of liberty or property, while also expressing public resentment, indignation, reprobation, and disapproval (Joel Feinberg has written well on this). But much the same infliction of harm and officially-sanctioned stigma could be accomplished by means that do not involve criminalisation of an activity or even the criminal justice system as we know it.

Although civil laws do not categorise those who breach them as criminals, they, too, can be used to attach a stigma to actions and to individuals, and even to destroy reputations and careers. In fact, the state can select many hostile and repressive means to achieve its aims. These include propaganda campaigns that stigmatise certain categories of people and officially-tolerated discrimination against people of whom it disapproves, such as by denying certain categories of people access to government employment. The state requires good justification before it calls upon its power to suppress any form of conduct by any of these means.

The Oklahoma law is clearly intended to intimidate and stigmatise women who have abortions, in an attempt to deter the practice. This is no more acceptable than outright criminalisation of abortion. The legislature's action merits our contempt, and the law concerned should be struck down as unconstitutional for exactly the same reasons as apply to an outright ban on abortions - it intrudes into an area of life that should be governed by personal privacy and individual choice.

Beyond this point, however, lies a further issue about the motivation for such laws ... and how they are best fought in the long term. It's not coincidental that laws such as this, which presuppose that pregnant women should have little control over their own bodies, tend to be enacted in jurisdictions where theistic religion is strong. We can insist that religious reasoning should have no authority in matters of law, but in societies where deference to religion is taken for granted that claim is likely to fall on deaf ears. It can be difficult convincing a hard-line Catholic bishop or a Protestant fundamentalist that political force should be used only in ways that are neutral between peaceful worldviews. Why accept that idea if you see your own worldview as representing the comprehensive truth about the world, morality, and the organisation of society? In societies where religion goes unchallenged, secular principles such as the separation of church and state have no traction.

For that reason, I've increasingly, over the past few years, come to the view that there's now some urgency in challenging the truth claims, epistemic authority, and moral wisdom of theistic religion - meeting its pretensions head-on. This urgency wouldn't exist if the various leaders, churches, and sects agreed, without equivocation, to a wall of separation between themselves and the state. But that's not so likely, since many of them can find reasons, by their own lights, to resist any sort of strict secularism.

John Rawls imagined that adherents to most religions and other comprehensive worldviews could find reasons - from within their own teachings and traditions - to reach an overlapping political consensus. In Rawlsian theory, they can all find their own reasons to support a kind of secularism in which the state would not impose any comprehensive worldview; rather, it would provide a regulatory and economic framework within which people with many views of the world or "theories of the good" could live in harmony. But many of the comprehensive religious worldviews do not lend themselves to this. It's not natural for them to find reasons in their traditional teachings to embrace Rawlsian political liberalism. The more apocalyptic churches and sects do not seek social harmony with adherents of other views of the good; instead, they imagine a future time when their own viewpoint will prevail, perhaps with divine assistance.

Even the more mainstream religious groups may be sceptical about any sharp distinction between individual salvation and the exercise of political power. They may be suspicious about social pluralism, if this includes, as it must, views deeply opposed to their own - such as the view that abortion is morally acceptable, or the more radical view that it is not even (morally speaking) a big deal.

In short, I continue to advocate secular principles, including a strong separation of church (or mosque) and state. But it is not enough to stop with advocating these principles when so many people do not accept the premises on which they are based, such as the right to freedom of belief, the unavoidable permanence of social pluralism, or the need to secure public peace through some reticence in struggling to impose private views of morality. We should go further than arguing for secularism, I think, and openly criticise worldviews that lead to travesties such as the Oklahoma abortion laws.

This may involve asking pointed questions such as whether the God who hates abortion even exists. That's okay. In a free society, we have every right to question views that we disagree with, so long as we don't attempt to suppress them by state power. In the case of abortion, you can believe it's a sin, and you can subscribe to a religion that supports your belief. But the cure for abortions is not to suppress them by the cruel use of state power. If you're against abortions, don't have one. Don't try to stop those who disagree.

If you do, don't be surprised if you are challenged on where you got your beliefs from, and whether they are well-evidenced. Maybe your religious tradition is riddled with error, and maybe your God doesn't exist. We're entitled to ask for the evidence and draw our own conclusions if it's not forthcoming.

Russell Blackford's home blog is Metamagician and the Hellfire Club. He is editor-in-chief of The Journal of Evolution and Technology and co-editor, with Udo Schuklenk, of 50 Voices of Disbelief: Why We Are Atheists (Wiley-Blackwell, 2009).

Link dump: 2009.10.19

From the four corners of the web:
  • How to save yourself from chasing futuristic red herrings
    For many people, the often outlandish proposals and predictions of futurists are just obviously impractical and are to be laughed off. This attitude, irrational as it may seem to futurists of the stripe who take outlandish ideas very seriously, is itself not to be sneered at -- automatic unbelievers in the alien save themselves from chasing many red herrings. Those who laugh at futurism because they are unimaginative dolts I will not try to defend, but those who laugh at futurism when futurists take themselves too seriously are usually spot-on.
  • The Top 10 Artificially Intelligent Characters in Movies
    As of now, no machine has "passed" the Turing test. In the movies, however, that's a completely different story, as we've seen literally dozens of artificially intelligent characters whose programming functions at a level that is indistinguishable from that of a human brain.
  • Small mechanical forces have big impact on embryonic stem cells
    Applying a small mechanical force to embryonic stem cells could be a new way of coaxing them into a specific direction of differentiation, researchers at the University of Illinois report. Applications for force-directed cell differentiation include therapeutic cloning and regenerative medicine.
  • Progress in Bioethics - MIT Press [book]
    Progress in Bioethics is the first book to debate the meaning of progressive bioethics and to offer perspectives on the topic both from bioethicists who consider themselves progressive and from bioethicists who do not. Its aim is to begin a dialogue and to provide a foothold for readers interested in understanding the field.
  • Is My Mind Mine? - Forbes.com
    How neuroimaging will affect personal freedom.
  • Prospective Parents and Genetic Testing
    Dr. Jennifer Ashton Discusses People Turning to Science to Try to Avoid Genetic Disorders in Future Children
  • Need a New Heart? Grow Your Own. - The Boston Globe
    The idea sounds like science fiction. But it might someday come true. A group of Boston scientists is pushing the bounds of regenerative medicine.
  • Drug testing could stop 'academic doping'
    Students taking important exams could one day find themselves in the same position as professional athletes -- submitting to a drug test before the big event. The practice of students taking cognitive-enhancing drugs, such as methylphenidate, has become so common that those who don't "dope" are at an unfair advantage, argues a psychologist writing in the new issue of Journal of Medical Ethics.
  • Physicists Calculate Number of Universes in the Multiverse
    If we live in a multiverse, it's reasonable to ask how many other distinguishable universes we may share it with. Now physicists have an answer

And Now, for Something Completely Different: Doomsday!

Casey Rae-Hunter is guest blogging this month.

I've certainly been having a wonderful time guest blogging here at SD. In fact, it's hard to believe the month is almost up. Since I started late, maybe the boss will give me an extension?

Having talked about heavy stuff like cognitive liberty, neurodiversity and my personal stake in such matters, I figured we might want to tackle a lighter subject. Like doomsday devices.

It's probably old news by now, but I was really taken by an article in last month's issue of WIRED, called "Inside the Apocalyptic Soviet Doomsday Machine." As a child of the 1970s and '80s, I remember fondly the thrill of itemizing Soviet and American nuclear arsenals and learning cool new terms like "Mutually Assured Destruction." My parents and grandparents were not as wowed by my obsession with atomic game theory, but they put up with it. From forensics to German expressionist films to how many megatons are in an MX missile ... such is life with a precocious and somewhat morbid kid.

Not to make this post purely personal, but there was another reason for my obsession. I grew up in Maine — a rural US state that one wouldn't think of as having anything to do with the nuclear arms race. To the contrary: America's northeasternmost state was positively riddled with backscatter radar stations, whose purpose was to detect a Soviet first strike. This made Maine more likely to be dusted in an initial attack than, say, Washington, DC ... where I currently live.

Yet as much information as my apocalypse-obsessed mind could consume, I never encountered any tales of Perimeter — a Soviet doomsday system that came online in the mid-'80s, and is apparently still at the ready. Perimeter, also known by the more chilling moniker, "Dead Hand," is among the most secret and mystifying artifacts of the Cold War. Most perplexing is the fact that even the highest-level US officials, past and present, have no knowledge of its existence.

Yet it exists. Very much so.

The author of the WIRED piece, Nicholas Thompson, tells the tale of Perimeter with the panache of a noir novelist. He reveals, through painstaking first-person research and some rather uncomfortable interviews with Soviet and American principals, how the Russkies devised a doomsday device that could still obliterate the US even after a devastating American first strike.

Perimeter ensures the ability to strike back, but it's no hair-trigger device. It was designed to lie semi-dormant until switched on by a high official in a crisis. Then it would begin monitoring a network of seismic, radiation, and air pressure sensors for signs of nuclear explosions. Before launching any retaliatory strike, the system had to check off four if/then propositions: If it was turned on, then it would try to determine that a nuclear weapon had hit Soviet soil. If it seemed that one had, the system would check to see if any communication links to the war room of the Soviet General Staff remained. If they did, and if some amount of time—likely ranging from 15 minutes to an hour—passed without further indications of attack, the machine would assume officials were still living who could order the counterattack and shut down. But if the line to the General Staff went dead, then Perimeter would infer that apocalypse had arrived. It would immediately transfer launch authority to whoever was manning the system at that moment deep inside a protected bunker—bypassing layers and layers of normal command authority. At that point, the ability to destroy the world would fall to whoever was on duty: maybe a high minister sent in during the crisis, maybe a 25-year-old junior officer fresh out of military academy. And if that person decided to press the button ... If/then. If/then. If/then. If/then.
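The four if/then checks in the quoted passage can be sketched as a small decision procedure. This is only an illustration of the logic as Thompson describes it — the function and parameter names are mine, and the 30-minute default merely stands in for the unspecified 15-to-60-minute window:

```python
from enum import Enum

class Decision(Enum):
    STAND_DOWN = "shut down; command authority presumed intact"
    WAIT = "keep monitoring"
    DELEGATE = "transfer launch authority to the bunker crew"

def perimeter_step(armed: bool,
                   detonation_detected: bool,
                   general_staff_link_alive: bool,
                   minutes_since_detonation: float,
                   timeout_minutes: float = 30.0) -> Decision:
    """One pass through the four if/then checks described in the article."""
    # 1. If the system was never switched on by a high official, do nothing.
    if not armed:
        return Decision.STAND_DOWN
    # 2. If no nuclear detonation is sensed on Soviet soil, keep waiting.
    if not detonation_detected:
        return Decision.WAIT
    # 3. If the line to the General Staff is alive and enough quiet time has
    #    passed, assume surviving officials can order any counterattack.
    if general_staff_link_alive:
        if minutes_since_detonation >= timeout_minutes:
            return Decision.STAND_DOWN
        return Decision.WAIT
    # 4. Detonation sensed and the General Staff line is dead: infer
    #    apocalypse and hand launch authority to whoever is on duty.
    return Decision.DELEGATE
```

Each evaluation either stands down, keeps waiting, or delegates; the chilling step is the last one, where a dead line to the General Staff is treated as proof that the leadership has been wiped out.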

Most interesting to me is the author's dead-on analysis of Ronald Reagan's "Star Wars" missile defense system, which the Soviets viewed less as a "shield" than as an act of sheer provocation:

Reagan announced that the US was going to develop a shield of lasers and nuclear weapons in space to defend against Soviet warheads. He called it missile defense; critics mocked it as "Star Wars."

To Moscow it was the Death Star—and it confirmed that the US was planning an attack. It would be impossible for the system to stop thousands of incoming Soviet missiles at once, so missile defense made sense only as a way of mopping up after an initial US strike. The US would first fire its thousands of weapons at Soviet cities and missile silos. Some Soviet weapons would survive for a retaliatory launch, but Reagan's shield could block many of those. Thus, Star Wars would nullify the long-standing doctrine of mutually assured destruction, the principle that neither side would ever start a nuclear war since neither could survive a counterattack.

As we know now, Reagan was not planning a first strike. According to his private diaries and personal letters, he genuinely believed he was bringing about lasting peace. (He once told Gorbachev he might be a reincarnation of the human who invented the first shield.) The system, Reagan insisted, was purely defensive. But as the Soviets knew, if the Americans were mobilizing for attack, that's exactly what you'd expect them to say. And according to Cold War logic, if you think the other side is about to launch, you should do one of two things: Either launch first or convince the enemy that you can strike back even if you're dead.

Wow, right? I mean, I don't want to spoil it for you if you haven't yet read it, which you should.

It's interesting to note that Dead Hand is still active. Still out there, its once finely-tuned sensors decaying alongside the other relics of the ex-Soviet military/tech apparatus, waiting for seismic and communications evidence of a major strike by a fiercely cultivated enemy.

Does anyone in their right mind feel safer?

Casey Rae-Hunter is a writer, editor, musician, producer and self-proclaimed "lover of fine food and drink." He is the Communications Director of the Future of Music Coalition — a Washington, DC think tank that identifies, examines, interprets and translates issues at the intersection of music, law, technology and policy. He is also the founder and CEO of The Contrarian Media Group, which publishes The Contrarian and Autistic in the District — the latter a blog about Asperger's Syndrome.

October 18, 2009

TED Talks: Henry Markram builds a brain in a supercomputer

This is another remarkable TED talk -- fascinating, incredibly informative and not without controversy. I'm overjoyed to hear an expert like Markram put forth a theory of mind that tries to address the problem of how the brain projects a representation of the universe to a subjective observer. I'm fairly convinced that his framing of the issue will yield some positive results.

Henry Markram says the mysteries of the mind can be solved in fairly short order. He argues that mental illness, memory and perception are all made of neurons and electric signals -- and he plans to find them with a supercomputer that models all of the brain's 100 trillion synapses.

Cognitive liberty and right to one's mind

We've been having a great discussion here at Sentient Developments on cognitive liberty and neurodiversity thanks to our guest blogger, Casey Rae-Hunter. Be sure to check out his recent posts, "Neuroplasticity and Coordinated Cognition: the Means of Self-Mastery?", "Neurodiversity vs. Cognitive Liberty", "Neurodiversity vs. Cognitive Liberty, Round II."

I'd now like to take a moment and address some issues as they pertain to cognitive liberty, a topic that I believe will start to carry some heavy implications in the near future.

Cognitive liberty is not just about the right to modify one's mind, emotional balance and psychological framework (for example, through anti-depressants, cognitive enhancers, psychotropic substances, etc.); it's also very much about the right to not have one's mind altered against one's will. In this sense, cognitive liberty is very closely tied to freedom of speech. A strong argument can be made that we have an equal right to freedom of thought and to the sustained integrity of our subjective experiences.

Our society has a rather poor track record when it comes to respecting the validity of certain 'mind-types'. We once tried to "cure" homosexuality with conversion therapy. Today there's an effort to cure autism and Asperger's syndrome -- a development the autistic rights people have railed against. And in the future we may consider curing criminals of their anti-social or deviant behaviour -- a potentially thorny issue to be sure.

There are many shades of gray when it comes to this important issue. It's going to require considerable awareness and debate if we hope to get it right. Your very mind may be at stake.

Neuroethical conundrums

Forced cognitive modification is an issue that's affecting real people today.

Aspies for Freedom claims that the most common therapies for autism are exactly this; the group argues that applied behaviour analysis (ABA) therapy, the forced suppression of stimming, aversion therapy, the use of restraints, and alternative treatments like chelation are unethical, dangerous and cruel. Jane Meyerding, an autistic person herself, has criticized any therapy which attempts to remove autistic behaviors, which she contends are behaviors that help autistics to communicate.

As this example shows, the process of altering a certain mind-type, whether it be homosexuality or autism, can be suppressive and harsh. But does the end justify the means? If we could "cure" autistics in a safe and ethical way and introduce them to the world of neurotypicality, should we do it? Many individuals in the autistic/Asperger's camp would say no, but there's clearly a large segment of the population who feel that these conditions are quite debilitating. It's not an easy question to answer.

This is an issue of extreme complexity and sensitivity, particularly when considering other implications of neurological modification. Looking to the future, there will be opportunities to alter the minds of pedophiles and other criminals guilty of anti-social and harmful behaviors. Chemical castration may eventually give way to a nootropic or genetic procedure that removes tendencies deemed inappropriate or harmful by the state.

Is this an infringement of a person's cognitive liberty?

Neuroconformity vs. neurodiversity

Consider the deprogramming of individuals to help them escape the clutches of a cult. The term itself is quite revealing: notice that it's deprogramming, not reprogramming -- a suggestion that the person is being restored to a pre-existing condition.

But what about cases like pedophilia or autism, where there is no pre-existing psychological condition to restore, save for whatever mind-state society deems to be appropriate? This is the (potential) danger of neuroconformism, the evil flipside to neurodiversity. Without a broad awareness and appreciation of alternative mind-types, we run the risk of re-engineering our minds into extreme homogeneity.

Now I'm not suggesting that we shouldn't treat sociopaths in this way. What I'm saying is that we need to tread this path very, very carefully. Manipulating minds in this way will have an irrevocable impact on a person's sense of self. In a very profound way, a person's previous self may actually be destroyed and replaced by a new version.

For us Buddhists this doesn't tend to be a problem as we deny the presence of a singular and immutable self; what we can agree on, however, is that our agency in the world is heavily impacted by our genetics and environment which leads to a fairly consistent psychology -- what we call personalities and tendencies. In most cases, we tend to become attached to our personality and tendencies -- it's what we like to call our 'self.' And it's perfectly appropriate to want to retain that consistent sense of self over time.

So, if one applies a strict interpretation of cognitive liberty, a case can be made that a sociopath deserves the right to refuse a treatment that would for all intents and purposes replace their old self with a new one. On the other hand, a case can also be made that a sociopathic criminal has forfeited their right to cognitive liberty (in essence the same argument that allows us to imprison criminals and strip them of their rights) and cannot refuse a treatment which is intended to be rehabilitative.

I am admittedly on the fence with this one. My instinct tells me that we should never alter a person's mind against their will; my common sense tells me that removing sociopathic tendencies is a good thing and ultimately beneficial to that individual. I'm going to have to ruminate over this one a bit further...

As for autism, however, I'm a bit more comfortable suggesting that we shouldn't force autistics into neurotypicality. At the very least we should certainly refrain from behavior therapy and other draconian tactics, but I have nothing against educating autistics on how to better engage and interact with their larger community.

And to repeat a point I made earlier, we should err on the side of neurodiversity and a strong interpretation of cognitive liberty. The right to our own minds and thoughts is a very profound one. We need to be allowed to think and emote in the way that we want; the potential for institutions or governments to start mandating to us what they consider to be "normal thinking" is clearly problematic.

So fight for your right to your mind!

October 17, 2009

Link dump: 2009.10.17

From the four corners of the web:
  • Wolfram Alpha's Second Act
    Following a sharp drop in interest, the "computational knowledge engine" pins hopes on API--and homework.
  • The Future of Supercomputers is Optical
    An IBM researcher gives a timeline for developing the next generation of supercomputers.
  • Google Profits Up 27% in Q3
    Google's quarterly profits jumped 27 percent, year over year, to $1.65 billion, marking a very strong showing in the third quarter of a tough year and outstripping analysts' predictions of results for the search and advertising giant.
  • Three Google Wave Searches Worth Saving - Searches - Lifehacker
    After only a few weeks of Wave usage, my inbox is full of waves from strangers and items I don't particularly care about. Rather than archiving everything in Wave, I'm going with the flow–with the help of saved searches.

October 16, 2009

Neurodiversity vs. Cognitive Liberty, Round II

Casey Rae-Hunter is guest blogging this month.

I've taken some hits on my recent post about the possible differences (semantic and conceptual) between neurodiversity and cognitive liberty. Some of them have happened outside of the hallowed Sentient Developments grounds, as one particular individual does not cotton to the Blogger/Google comments protocol here at SD.

Mostly, the arguments have centered on a) my lack of specificity in articulating clear differences between the two terms, and b) my assumption that those with Asperger's Syndrome may be using neurodiversity as an excuse to advocate for an aggressive "hands-off" approach to neurological governance.

I'm writing this follow-up post to (hopefully) better explain why I think that neurodiversity and cognitive liberty — while sharing some similarities — are quite different animals.

Perhaps the best way to do this is to not focus on neurodiversity, as it can mean quite a few different things depending on your politics. At this point in history, theories of cognitive liberty will no doubt sound Philip K. Dick-ian, but it's never too early to start pondering the ethical and regulatory frameworks that impact societal attitudes and individual outcomes. In fact, George did a great job of itemizing these issues just the other day.

In case you missed it, below are my initial Principles for Cognitive Liberty, which I have expanded and clarified. Below that is a paragraph that should better illustrate some differences between neurodiversity and cognitive liberty (keep in mind that there are plenty of similarities).

1. Cognitive liberty is the basic right of an individual to pursue potentially beneficial psychological/neurological trajectories. If the individual is unable to make these choices themselves, then it is the right of their closest family members to make them, provided they are not coerced by the medical establishment or prevailing social strata.

2. Cognitive liberty recognizes that information and education are key to making informed choices. In the absence of such information, cognitive libertarians will advocate for the fullest range of data when considering treatment options or lifestyle planning.

3. Cognitive liberty recognizes the range of psychological profiles in both the neurotypical world and otherwise. Until and unless an individual's psychology can be determined as infringing on another individual's cognitive liberty, they are free to pursue or not pursue strategies for conventional adaptation, possible enhancement or any other cognitive application — actual or postulatory.

4. Cognitive liberty recognizes the right to pharmacological experimentation, within existing legal structures. Where those structures are not beneficial or unnecessarily inhibit potentially useful individual research, cognitive libertarians reserve the right to challenge legal frameworks (and, where appropriate and with full comprehension of the punitive risks, step beyond them).

5. Cognitive liberty recognizes the essential function of the governmental regulatory apparatus, but places others' cognitive liberty ahead of the societal, legal or bureaucratic status quo. Through education, research and advocacy, cognitive libertarians can and should present information to policymakers that will enhance governmental comprehension of current and emerging issues. Where decisions are made, they must be transparent and open to debate.

6. Cognitive liberty is not an outlier of the neurodiversity movement. It is a separate but complementary effort to enhance understanding about the range of possibilities in self-directed cognition.

Once again, let's look at how this differs from neurodiversity.

A) Neurodiversity does not necessarily include an ethical framework for enhancement or targeted augmentation.

B) Neurodiversity may not currently recognize the efficacy of ethical "uplift" for the benefit of enhanced (or even equal) powers of cognition. Cognitive liberty leaves room for these discussions, while not advocating specifically for one or another approach.

C) Neurodiversity offers a necessary framework for human rights within the neurological and psychological spectrum, in which neurological pluralism is part of a new social contract. Cognitive liberty is not in opposition to these tenets, but is perhaps more concerned with the essential right of sentient beings to play an active part in shaping their cognitive destiny by available means.

This post may open a whole 'nother can of worms, but I certainly embrace any conversation or debate it inspires!

Casey Rae-Hunter is a writer, editor, musician, producer and self-proclaimed "lover of fine food and drink." He is the Communications Director of the Future of Music Coalition — a Washington, DC think tank that identifies, examines, interprets and translates issues at the intersection of music, law, technology and policy. He is also the founder and CEO of the Contrarian Media Group, which publishes The Contrarian and Autistic in the District — the latter a blog about Asperger's Syndrome.

October 15, 2009

Link dump: 2009.10.15

From the four corners of the web:

October 14, 2009

Limits to the biolibertarian impulse

I've often said that transhumanism is supported and strengthened by three basic impulses, namely the upholding of our reproductive, morphological and cognitive liberties. Should any one of these be absent, the tripod cannot stand.

We transhumanists stand divided on any number of issues; put us in a room together and you're guaranteed to get an argument. But one aspect that unites virtually all of us is our steadfast commitment to biolibertarianism -- the suggestion that people, for the most part, deserve considerable autonomy over their minds, bodies and reproductive processes.

Granted, conceptions of what is meant by biolibertarianism vary considerably. I'm sure there are many transhumanists who feel that any state involvement in the development, regulation and implementation of transhumantech is completely unwarranted. But a number of transhumanists, including those of us who are affiliated with the Institute for Ethics and Emerging Technologies (IEET), believe there's more to it than that.

Safety checks

Indeed, these technologies are far too powerful to be left to unchecked market forces and the whims of individuals. Most companies and people can be trusted with such things, but there's considerable potential for abuse and misuse: the availability of dangerous and unproven pharmaceuticals, irresponsible fertility clinics, or parents who want to give their children horns and a devil's tail. Not cool. This is why the state will have to get involved.

Without safety and efficacy the biolibertarian agenda is facile. I strongly agree that we should allow market forces to drive the development of transhumantech, but state involvement will be necessary to ensure that these technologies are safe, effective and accessible. Governments will also need to ensure that individuals aren't harming themselves or others with these technologies.

All this said, I'll restate an earlier point: transhumanists tend to hold the biolibertarian conviction that informed and responsible adults have the right to modify their minds and bodies as they see fit and to reproduce in a way that best meets their needs. The state has no business telling people what they should look like, how they should reproduce or how their minds should work. Governments should only intervene in extreme cases, particularly when the application of these biotechnologies leads to abuse and severely diminished lives.

The need for tolerance

But even this is tricky. What do we mean by a 'diminished' life or self-inflicted harm? Who are we to decide which choices are permissible and which are not?

The key, in my opinion, will be to remain informed and open-minded. It will be important to understand why individuals choose to modify themselves in certain ways -- and accept it. We may not always agree, but we'll often need to tolerate.

And in so doing we'll be in a better position to uphold the rights of individuals to shape their lives and experiences as they best see fit.

October 13, 2009

Link dump: 2009.10.13

From the four corners of the web:

Neurodiversity vs. Cognitive Liberty

Casey Rae-Hunter is guest blogging this month.

Part of the great debate that has come to characterize current assignments within the autism spectrum has centered on the concept of neurodiversity, which is, to my understanding, an umbrella term that connotes a desire to respect the neurological integrity of individuals. However, it has come to mean more to some with Asperger's Syndrome — particularly those adult "aspies" whose self-definition and place in the world may be hard won, to say the least.

It is somewhat difficult to have a cogent argument about neurodiversity at this stage in history, due to the relative newness of the Asperger's diagnosis. The sociological impact of having an entire generation of adults coming to grips with the existence of an autistic spectrum (and their place within it) cannot be overstated. These are early days for aspie advocacy, so it's to be expected that some within this community, having suffered a broad array of indignities, would want to assert themselves through what they see as favorable self-categorization. To others, however, it may be interpreted as elitism.

Many adults with Asperger's (such as myself) did not have the benefit of social or scholastic acceptance of their differences. My own burdens were lightened considerably by my eventual AS diagnosis, but I'm sure for some this is not the case. Keep in mind that Asperger's is an autistic spectrum disorder — if you've met one aspie... you've met one aspie. I've heard some real horror stories of tragic childhoods, miserable school experiences and failed relationships, so I understand why some folks with AS may feel a certain degree of embitterment towards the neurotypical world. And it's definitely easy to retreat into a fantasy where you're the "superior" and everyone else just doesn't "get it."

Perhaps an analogy can be drawn to the feminist movement of the early 1960s. Having endured years of societal repression — if not outright abuse — at the hands of a patriarchal status quo, was it any wonder that some self-identifying feminists pushed the envelope of diplomatic conversation with larger society? My opinion is that some in the AS community are having their "I am Aspie, hear me roar" moment.

Well intentioned as such advocacy may be, it seems unfair to champion "neurodiversity" when there are people with, ahem, "lower functioning" autism who struggle greatly with their neurological lot. Families of autistic individuals may actually prefer a "cure" for the condition over a lifetime of social stigma, behavioral outbursts and isolation. From that perspective, "fixing autism" looks pretty compassionate.

For those on the Asperger's side of the spectrum, the idea that aspies should be "cured" — likely through medical, societal or familial coercion — is as offensive as it gets. As we piece together the historic record of autism, it's clear that a shocking number of the most influential minds of the last several centuries may indeed have had Asperger's Syndrome: Nikola Tesla, Albert Einstein, Andy Warhol, Mozart... the speculative list goes on and on. If you'd suffered a lifetime of mistreatment by peers and ostracism in romance or the workplace, wouldn't you want to self-identify with such titans of mentation? And who's to say that the increase in diagnosed Asperger's isn't just due to better clinical testing? Perhaps it's an evolutionary advantage — wouldn't our digital era favor adaptive traits that reward certain kinds of functioning? Ever wonder why there are so many aspie kids in Silicon Valley? Born to code, indeed.

On the other hand, this could all be a scientific canard.

It's probably better and more helpful to examine the meaning of cognitive liberty — which is to say, the right to psychological self-determination, based on robust informational resources and stratified by some level of societal tolerance. Before you say, "hey, that sounds like neurodiversity," consider my handy Principles of Cognitive Liberty:

1. Cognitive liberty is the basic right of an individual to pursue beneficial psychological trajectories. If the individual is unable to make these choices themselves, then it is the right of their closest family members to make them, provided they are not coerced by the medical establishment or prevailing social strata.

2. Cognitive liberty recognizes that information and education are key to making informed choices. In the absence of such information, cognitive libertarians will advocate for the fullest range of data when considering treatment options or lifestyle planning.

3. Cognitive liberty recognizes the range of psychological profiles in both the neurotypical world and otherwise. Until and unless an individual's psychology can be determined as infringing on another individual's cognitive liberty, they are free to pursue or not pursue strategies for conventional adaptation or any other panacea — actual or postulatory.

What do you think about neurodiversity vs. cognitive liberty? How practical is either?

Casey Rae-Hunter is a writer, editor, musician, producer and self-proclaimed "lover of fine food and drink." He is the Communications Director of the Future of Music Coalition — a Washington, DC think tank that identifies, examines, interprets and translates issues at the intersection of music, law, technology and policy. He is also the founder and CEO of the Contrarian Media Group, which publishes The Contrarian and Autistic in the District — the latter a blog about Asperger's Syndrome.