September 30, 2010

Dedication to healthy foods considered an eating disorder

It almost sounds like the headline from an Onion article.

But back in August of 2009 the Guardian published a piece about how a fixation with healthy eating can be a sign of a serious psychological disorder. Called orthorexia nervosa, this so-called 'condition' was first identified by Californian doctor Steven Bratman in 1997 and is described as a "fixation on righteous eating." According to the article,
Orthorexics commonly have rigid rules around eating. Refusing to touch sugar, salt, caffeine, alcohol, wheat, gluten, yeast, soya, corn and dairy foods is just the start of their diet restrictions. Any foods that have come into contact with pesticides, herbicides or contain artificial additives are also out.
This obsession about which foods are "good" and which are "bad" means orthorexics can end up malnourished, claim the researchers, but at the same time be overweight or look normal. They are solely concerned with the quality of the food they put in their bodies, refining and restricting their diets according to their personal understanding of which foods are truly 'pure'.

The most susceptible are middle-class, well-educated people who regularly read about food "scares" and have the time and money to source what they believe to be purer alternatives.

I could go on but I'm going to stop right there; you get the picture.

Wow, I'm flabbergasted by this. While I admit it's possible that a very small minority of health-conscious people may actually be starving themselves on account of food paranoia, I have to think it's exceptionally rare. But according to this article and the researchers cited, orthorexia nervosa is a pervasive problem. In fact, there is a quote in the article from Deanne Jade, founder of the National Centre for Eating Disorders, who said, "There is a fine line between people who think they are taking care of themselves by manipulating their diet and those who have orthorexia. I see people around me who have no idea they have this disorder. I see it in my practice and I see it among my friends and colleagues."

Okay, so there's an abundance of well-educated, informed, middle-class health nuts.

And their dedication to eating healthily is now considered an eating disorder.

Specifically, those people who have eliminated such things as sugar, salt, caffeine, alcohol, wheat, gluten, yeast, soya, corn and dairy foods from their diets—not to mention pesticides, herbicides, and artificial additives.

These people have an eating disorder?

I hardly think so. These people are my heroes, for goodness' sake. While I can understand why some people might consider them obsessives, I think of them as focused and disciplined. Eliminating those particular foods along with those extraneous toxins should be considered a good thing.

But therein lies the problem. These researchers, some of whom should know better (particularly the dietitians), are much like society in general: completely ignorant of what constitutes a healthy diet. As a consequence, any deviation from the status quo—in this case an apparently radically restrictive diet—is considered not just deviant behavior, but something that's actually pathological in nature.

Truth is, the vast majority of "food" out there is stuff we shouldn't be eating in the first place; the core of the modern grocery store is nothing more than a crap dispenser. As a consequence, the world's eating habits are insane. But some people are getting wise to it, adopting such diets as Paleo, Zone, and others. Yes, these diets can be quite restrictive in the types and quantities of foods involved, but that's the reality of healthy (and dare I say ethical) eating.

As a result, for those unaccustomed to, or unfamiliar with, what a truly healthy diet looks like, it may look rather spartan. If not completely bonkers.

The food industry is partly to blame. Hyper-processed and fast foods laden with sugar and salt are a staple of many diets. Ad campaigns fool consumers into thinking they're eating healthily. Parents are regularly deceived into thinking that a bowl of super-sweetened cereal is an integral part of their children's well-balanced diet—all because it has a bit of fibre in it.

The government is also partly responsible with their ridiculously inaccurate food pyramid. This is a particularly nefarious and longstanding turd of misinformation (or deliberate disinformation?) that informs the food industry and a myriad of other institutions about what and how much they're supposed to prepare and serve to the public.

Lastly, the general population is also to blame. Like the cigarette smoker, most people knowingly engage in habits that are bad for them, while many others insist on remaining ignorant.

So, here in the developed world where there are pandemics of diabetes, obesity, heart disease, metabolic syndrome and many, many other lifestyle related diseases, we are now being told that a dedication to prevention is a psychological disorder. What foolishness. This is irresponsible to the point of negligence.

The phrase 'my body is a temple' comes to mind. For many of us, our ongoing efforts to keep our minds and bodies healthy are an integral part of our daily lives. We know that proper habits will affect our health in both the short and long term. By making careful food choices now and having the discipline to avoid unhealthy eating, we stand a much better chance of extending our healthy lifespan and quality of life. There is nothing wrong with that.

In fact, if only more people had this so-called 'orthorexia nervosa' we'd all be in a much better place.

Sebastian Seung @ TED: I am my connectome

Neuroscientist Sebastian Seung recently gave an extremely insightful and informative TED talk called, "I am my connectome." I highly recommend this as it touches upon a number of timely subjects, including the Human Connectome Project, the important work of Harvard neuroscientist Kenneth Hayworth, cryonics, and (peripherally) whole brain emulation.

September 27, 2010

Toth-Fejel: The politics and ethics of the weather machine

A tiny portion of a Hall Weather Machine at 90,000 ft. This density may be able to ameliorate global warming/cooling, but would not be able to control weather.
A number of years ago, nanotechnology theorist J. Storrs Hall conceived of what is now called the Hall Weather Machine. It's exactly what it sounds like, an advanced system for controlling the weather:
The Hall Weather Machine is a thin global cloud consisting of small transparent balloons that can be thought of as a programmable and reversible greenhouse gas because it shades or reflects the amount of sunlight that hits the upper stratosphere. These balloons are each between a millimeter and a centimeter in diameter, made of a few-nanometer thick diamondoid membrane. Each balloon is filled with hydrogen to enable it to float at an altitude of 60,000 to 100,000 feet, high above the clouds. It is bisected by an adjustable sheet, and also includes solar cells, a small computer, a GPS receiver to keep track of its location, and an actuator to occasionally (and relatively slowly) move the bisecting membrane between vertical and horizontal orientations. Just like with a regular high-altitude balloon, the heavier control and energy storage systems would be on the bottom of the balloon to automatically set the vertical axis without requiring any energy. The balloon would also have a water vapor/hydrogen generator system for altitude control, giving it the same directional navigation properties that an ordinary hot-air balloon has when it changes altitudes to take advantage of different wind directions at different altitudes.
What's particularly impressive about the weather machine is that controlling a tenth of one percent of solar radiation is enough to force global climate in any direction we want. One percent is enough to change regional climate, and ten percent is enough for serious weather control.
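To get a sense of the scale involved, here is a back-of-the-envelope sketch in Python. The solar constant and Earth's radius are standard reference values; the 0.1% figure is the one cited above. This is a rough illustration, not anything from Hall's own analysis:

```python
import math

SOLAR_CONSTANT = 1361.0  # W/m^2, mean solar irradiance at the top of the atmosphere
EARTH_RADIUS = 6.371e6   # m

def intercepted_power(fraction):
    """Solar power (watts) falling on Earth's disc, scaled by a control fraction."""
    disc_area = math.pi * EARTH_RADIUS ** 2
    return SOLAR_CONSTANT * disc_area * fraction

print(f"Total insolation:   {intercepted_power(1.0) / 1e12:,.0f} TW")
print(f"0.1% control lever: {intercepted_power(0.001) / 1e12:,.0f} TW")
```

The 0.1% lever works out to roughly 170 terawatts, about ten times humanity's total power consumption, which gives a sense of why even that small a fraction of sunlight is enough to push global climate around.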

The implications for remedial ecology, geoengineering, and technogaianism in general are profound, to say the least.

But as Tihamer Toth-Fejel, a research engineer and friend to the transhumanists, notes in his article, "The Politics and Ethics of the Hall Weather Machine," managing the social and environmental implications of such a control system could prove to be tricky, if not completely untenable.

The weather machine could prove to be a disaster, either through misuse, abuse, or just plain ignorance.

For example, the global coordination of the reflective weather machine would allow for the bouncing of concentrated solar energy around the globe, making it possible to set cities on fire—the equivalent of dropping a nuclear bomb every second for as long as desired. As Toth-Fejel notes, "the potential for abuse is rather large." The temptation to weaponize such a device may be overwhelming. The whole project could start various arms races, including efforts to bring the entire system down.

In the article, Toth-Fejel considers a number of other scenarios and possible implications, both good and bad. Having a weather machine in place introduces a slew of fascinating implications, ranging from the environmental to the political. Toth-Fejel offers no easy answers or trite solutions, and instead uses the article to raise awareness about this important possibility.

Read more.

Notaro: Why do we believe what we believe?

I love the intersection of psychology, rationality and memetics: "Why do we believe what we believe" by the IEET's Kris Notaro:
One can make the argument that people accept certain memes over others mainly because of what they are taught they will get out of believing them. If we apply instrumental conditioning (similar to classical conditioning) where we replace behavior with belief, the consequence would be, for example, that belief in god leads to eternal life in heaven. Or we can make the claim that “god experiences” lead to a warm feeling in the “heart and minds” of those who believe God is there for them. Therefore, they believe and act accordingly to what the meme’s reward is: eternal life and/or a good feeling.

In a conversation I had recently with a professor of psychology, we both agreed that a modification of the Critical Period concept is probably the primary reason why people in today’s culture believe in God. This would entail that the critical period for people to take on a belief in God would extend to adolescents and young adults. I personally started to question the existence of God in middle school, whereas Prof. A. started to question her faith during high school. We are both agnostic/atheist primarily because of these experiences early in our brain development. We both accept modern paradigms of science over religion.

Neuroscience has shown that specific brain regions, synaptic bonds, and neurotransmitters influence people to believe one concept over another. Philosophy, sociology, and psychology demonstrate how people can take on certain beliefs because of critical periods of learning, choice (Doxastic Voluntarism), cooperation, and instrumental conditioning. Each of these pressures on the brain can lead to the propagation of concepts/memes. There are no definitive schemes which explain why we believe what we believe as a society, culture, and world. However, I would argue that in the healthy brain all these reasons are interconnected, from brain regions to simplistic mythical stories.

Eucrio set to launch on October 1

Via Accelerating Future:

Eucrio will officially launch on Friday, October 1st.
From the website:
The Company
EUCRIO is an organization that specializes in providing state-of-the-art standby, stabilization, and transport procedures for cryonicists in the European Union. EUCRIO is pleased to assist members of the three main cryonics storage provider organizations.
The People
EUCRIO employs a wide variety of professionals: including physicians, perfusionists, emergency medical technicians, engineers and scientists, throughout the European Union. EUCRIO has staff members ready to intervene across the European Union and all are ready to respond to clients at all times (24 hours a day, 7 days per week).

Stuxnet worm allows hackers to control industrial machinery

Well, it finally happened: A worm has been developed that can break into computers which control machinery at the heart of industry. Such a security breach could allow attackers to assume control of critical systems like pumps, motors, alarms and valves in an industrial plant. Worse, safety systems could be switched off at a nuclear power plant; fresh water contaminated with effluent at a sewage treatment plant, or the valves in an oil pipeline opened, contaminating the land or sea.

The worm is called Stuxnet and it's about 600 kilobytes in size. It was professionally written, an indication that a nation-state or organized crime outfit is likely behind it.

This worm will prove particularly problematic for legacy systems, but it's also a wake-up call for new distributed systems such as smart grids. Security will have to be embedded in the architecture right from the start to avoid such vulnerabilities.


September 26, 2010

Mazlan Othman: United Nations ambassador to extraterrestrials

ADDENDUM: This story is bullshit. Dammit, I had a feeling....

Breaking news in the fast-paced world of exopolitics: The United Nations has appointed an obscure Malaysian astrophysicist, Mazlan Othman, to act as Earth's primary point of contact for visiting extraterrestrials. Othman will head the UN's little-known Office for Outer Space Affairs (Unoosa).

The recent discovery of hundreds of planets around other stars has made the detection of alien life more likely than ever before; the UN feels that it must be ready to coordinate humanity’s response to any first contact. During a recent talk, Othman noted, "The continued search for extraterrestrial communication, by several entities, sustains the hope that some day humankind will receive signals from extraterrestrials. When we do, we should have in place a coordinated response that takes into account all the sensitivities related to the subject. The UN is a ready-made mechanism for such coordination."

Professor Richard Crowther, an expert in space law and governance at the UK Space Agency and who leads British delegations to the UN on such matters, noted that Othman is now the nearest thing we have to a 'take me to your leader' person.

In my opinion, the first order of business for Othman should be a screening of Mars Attacks to learn what not to do during first contact:


Boyden: Helping brains and machines work together

Credit: Technology Review
Neuroscientist Edward Boyden, in his article, "Brain Coprocessors," says we need to develop operating systems to help brains and machines work together.

Boyden notes that over the past 20 years there has been a slew of technologies that have enabled the observation or perturbation of information in the brain.

Take functional MRI, for example, which measures blood flow changes associated with brain activity. fMRI technology is being explored for purposes as diverse as lie detection, prediction of human decision making, and assessment of language recovery after stroke.

And implanted electrical stimulators, which enable control of neural circuit activity, are borne by hundreds of thousands of people to treat conditions such as deafness, Parkinson's disease, and obsessive-compulsive disorder. In addition, new methods, such as the use of light to activate or silence specific neurons in the brain, are being widely utilized by researchers to reveal insights into how to control neural circuits to achieve therapeutically useful changes in brain dynamics. "We are entering a neurotechnology renaissance," says Boyden, "in which the toolbox for understanding the brain and engineering its functions is expanding in both scope and power at an unprecedented rate."

He continues:
This toolbox has grown to the point where the strategic utilization of multiple neurotechnologies in conjunction with one another, as a system, may yield fundamental new capabilities, both scientific and clinical, beyond what they can offer alone. For example, consider a system that reads out activity from a brain circuit, computes a strategy for controlling the circuit so it enters a desired state or performs a specific computation, and then delivers information into the brain to achieve this control strategy. Such a system would enable brain computations to be guided by predefined goals set by the patient or clinician, or adaptively steered in response to the circumstances of the patient's environment or the instantaneous state of the patient's brain.

Some examples of this kind of "brain coprocessor" technology are under active development, such as systems that perturb the epileptic brain when a seizure is electrically observed, and prosthetics for amputees that record nerves to control artificial limbs and stimulate nerves to provide sensory feedback. Looking down the line, such system architectures might be capable of very advanced functions--providing just-in-time information to the brain of a patient with dementia to augment cognition, or sculpting the risk-taking profile of an addiction patient in the presence of stimuli that prompt cravings.
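The read-compute-act loop behind such a system is, at its core, closed-loop control. Here is a minimal sketch, with entirely hypothetical function names and a made-up threshold rule; real neuroprosthetics involve hard real-time and safety engineering far beyond this:

```python
def detect_seizure(activity, threshold=0.8):
    """Hypothetical detector: flag trouble when mean activity crosses a threshold."""
    return sum(activity) / len(activity) > threshold

def coprocessor_step(read_activity, stimulate):
    """One pass of the read -> compute -> act loop Boyden describes."""
    activity = read_activity()     # observe the circuit
    if detect_seizure(activity):   # compute a control decision
        stimulate(intensity=0.5)   # perturb the circuit toward the desired state
        return "stimulated"
    return "idle"

# Toy stand-ins for the recording and stimulation hardware:
log = []
print(coprocessor_step(
    read_activity=lambda: [0.9, 0.95, 0.85],
    stimulate=lambda intensity: log.append(intensity),
))  # stimulated
```

The seizure-responsive systems Boyden mentions run essentially this loop continuously, with the "compute" step ranging from a simple threshold like this one to adaptive models of the patient's brain state.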
Looking ahead to the future, Boyden admits that we'll need to be careful:
Of course, giving machines the authority to serve as proactive human coprocessors, and allowing them to capture our attention with their computed priorities, has to be considered carefully, as anyone who has lost hours due to interruption by a slew of social-network updates or search-engine alerts can attest. How can we give the human brain access to increasingly proactive coprocessing technologies without losing sight of our overarching goals? One idea is to develop and deploy metrics that allow us to evaluate the IQ of a human plus a coprocessor, working together--evaluating the performance of collaborating natural and artificial intelligences in a broad battery of problem-solving contexts. After all, humans with Internet-based brain coprocessors (e.g., laptops running Web browsers) may be more distractible if the goals include long, focused writing tasks, but they may be better at synthesizing data broadly from disparate sources; a given brain coprocessor configuration may be good for some problems but bad for others. Thinking of emerging computational technologies as brain coprocessors forces us to think about them in terms of the impacts they have on the brain, positive and negative, and importantly provides a framework for thoughtfully engineering their direct, as well as their emergent, effects.

September 25, 2010

Searching for Kardashev III civilizations

Fascinating article by Paul Gilster over at Centauri Dreams: Interstellar Archaeology on the Galactic Scale. In the article, Gilster discusses the work of Richard Carrigan of Fermilab and his recent paper, "Starry Messages: Searching for Signatures of Interstellar Archaeology."

Carrigan argues that we should broaden SETI's scope to include the archeological remnants of Kardashev III civilizations, namely those civilizations that have successfully tapped their galaxy's entire energy output. At first blush, one would assume that a K3 galaxy would be immediately obvious, with every one of its stars enclosed in a Dysonian structure of some sort. But Carrigan makes the case that this assumption may not hold:
…what would happen for a civilization on its way to becoming a type III civilization, a type II.5 civilization so to say? If it was busily turning stars into Dyson spheres the civilization could create a “Fermi bubble” or void in the visible light from a patch of the galaxy with a corresponding upturn in the emission of infrared light. This bubble would grow following the lines of a suggestion attributed to Fermi… that patient space travelers moving at 1/1000 to 1/100 of the speed of light could span a galaxy in one to ten million years. Here “Fermi bubble” is used rather than “Fermi void”, in part because the latter is also a term in solid state physics and also because such a region would only be a visible light void, not a matter void.
As Gilster notes, this is long-term thinking in the richest sense; a patient, long-lived civilization could envelop a galaxy on a time-scale comparable to or shorter than the rotation period of the galaxy (roughly 250 million years).
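The travel-time arithmetic behind that kind of claim is easy to check. A quick sketch, assuming a standard ~100,000 light-year galactic diameter and ignoring colonization stopovers along the way:

```python
GALAXY_DIAMETER_LY = 100_000  # light-years, a standard rough figure

def crossing_time_years(speed_fraction_of_c):
    """Years for a wavefront moving at a constant fraction of c to cross the galaxy."""
    return GALAXY_DIAMETER_LY / speed_fraction_of_c

print(f"{crossing_time_years(1 / 100):,.0f} years at c/100")
print(f"{crossing_time_years(1 / 1000):,.0f} years at c/1000")
```

Either way the answer is comparable to or shorter than the galaxy's roughly 250-million-year rotation period, which is the point being made here.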

Civilizations that are busy turning stars into Dyson spheres should leave vast Fermi 'bubbles' whose infrared signature would flag their existence. But as Carrigan notes, detection might still elude us.

For example, we see M51, the Whirlpool galaxy, face-on at a distance of 30 million light years. We can say with some confidence that we see no unexplained voids larger than about five percent of M51's area, but any void features below this level would be hard to identify because of spiral galaxy structure. Elliptical galaxies might be better places to look for Fermi bubbles, because they display little structure, and potential voids should be far more pronounced.

And then there's the difficulty in separating artificial structure from natural phenomena where the tendency is to defer to the latter.

Gilster concludes:
I come back around to the premise behind interstellar archaeology, that unlike conventional SETI it does not require a civilization to have any intention of contacting us. There are numerous ways to proceed, involving the kind of Dyson sphere search Carrigan has himself conducted within our own galaxy, or looking at planetary atmospheres in hopes of finding not only biosignatures but the markers of an advanced industrial or post-industrial culture. As we continue the SETI hunt, keeping in mind how planetary change or deliberate decisions to expand into the galaxy could leave visible traces allows us to hunt for things advanced intelligence might do.

How many civilizations in our galaxy, for example, have already faced the end of their main sequence star’s lifetime? If the number is high, it may be that we can find evidence of their response in the form of planetary or stellar engineering, making stars of this description interesting targets for future searches. In any case, our model of SETI is changing as not only our technologies but our assumptions become more sophisticated, leaving us to ponder a universe in which the need for expansion or simple survival may have left its own detectable history.

September 21, 2010

Creatine improves working memory and general intelligence

This study goes back to 2003, but it's good to know, particularly if you're vegetarian or vegan:
Research undertaken by scientists at the University of Sydney and Macquarie University in Australia has shown that taking creatine, a compound found in muscle tissue, as a dietary supplement can give a significant boost to both working memory and general intelligence. The work, to be published in a forthcoming Proceedings B, a learned journal published by the Royal Society, monitored the effect of creatine supplementation on 45 young adult vegetarian subjects in a double-blind, placebo-controlled experiment.

Blackmore: I no longer believe religion is a virus of the mind

Susan Blackmore, after attending an "Explaining Religion" conference, now believes that the idea of religious belief as a virus has had its day:
...[R]eligious memes are adaptive rather than viral from the point of view of human genes, but could they still be viral from our individual or societal point of view? Apparently not, given data suggesting that religious people are happier and possibly even healthier than secularists. And at the conference, Ryan McKay presented experimental data showing that religious people can be more generous, cheat less and co-operate more in games such as the prisoner's dilemma, and that priming with religious concepts and belief in a "supernatural watcher" increase the effects.

So it seems I was wrong and the idea of religions as "viruses of the mind" may have had its day. Religions still provide a superb example of memeplexes at work, with different religions using their horrible threats, promises and tricks to out-compete other religions, and popular versions of religions outperforming the more subtle teachings of the mystical traditions. But unless we twist the concept of a "virus" to include something helpful and adaptive to its host as well as something harmful, it simply does not apply. Bacteria can be helpful as well as harmful; they can be symbiotic as well as parasitic, but somehow the phrase "bacterium of the mind" or "symbiont of the mind" doesn't have quite the same ring.
Hmmm, not sure how I feel about this. I don't understand Blackmore's insistence on associating viruses with exclusively negative impacts on the host. Viruses don't care what impact they have on the host so long as they have created the conditions for successful transmission and replication. Sure, viruses are nasty most of the time, but sometimes they can be neutral and even helpful; viruses have become quite useful in gene therapy, for example, allowing researchers to insert and remove genetic material from eukaryotic cells far more easily (and with a higher success rate) than ever before.

Now, all this said, I'm completely open to new ways of describing and analogizing the replicative strategies of memes. I would have no problem talking about the "bacterial" spread of memes; if the analogy accurately describes what's happening, then we should feel free to use it. I don't find this awkward, I find it exciting!

Oh, and full props to Blackmore for having the courage to claim she was wrong about something. While I don't necessarily believe she was wrong, I have great respect for her decision to come out and publicly challenge her own views.

September 20, 2010

Human Connectome Project to start mapping brain's connections

The National Institutes of Health recently awarded grants totaling $40 million to map the human brain's connections in high resolution. It's hoped that better understanding of such connectivity will result in improved diagnosis and treatment of brain disorders.

To do so, state-of-the-art scanners will be employed to reveal the brain's intricate circuitry in high resolution.

The grants are the first awarded under the Human Connectome Project and they will support two collaborating research consortia. The first will be led by researchers at Washington University, St. Louis, and the University of Minnesota, Twin Cities, while the other will be led by investigators at Massachusetts General Hospital (MGH)/Harvard University, Boston, and the University of California Los Angeles (UCLA).

"We're planning a concerted attack on one of the great scientific challenges of the 21st century," said Washington University's David Van Essen, Ph.D., who co-leads one of the groups with Minnesota's Kamil Ugurbil, Ph.D. "The Human Connectome Project will have transformative impact, paving the way toward a detailed understanding of how our brain circuitry changes as we age and how it differs in psychiatric and neurologic illness."

The Connectome projects are being funded by 16 components of NIH under its Blueprint for Neuroscience Research.

This highly coordinated effort will use state-of-the-art imaging instruments, analysis tools and informatics technologies — and all of the resulting data will be freely shared with the research community. Individual variability in brain connections underlies the diversity of human cognition, perception and motor skills, so understanding these networks promises advances in brain health.

One of the teams will map the connectomes in each of 1,200 healthy adults — twin pairs and their siblings from 300 families. The maps will show the anatomical and functional connections between parts of the brain for each individual, and will be related to behavioral test data. Comparing the connectomes and genetic data of genetically identical twins with fraternal twins will reveal the relative contributions of genes and environment in shaping brain circuitry and pinpoint relevant genetic variation. The maps will also shed light on how brain networks are organized.
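The identical-versus-fraternal comparison described above is the classic twin design. As a rough illustration of how it separates genes from environment, Falconer's formula estimates heritability from the two twin correlations. The correlations below are invented for the example, not Connectome data:

```python
def falconer_heritability(r_mz, r_dz):
    """Falconer's formula: heritability is roughly twice the difference between
    identical (MZ) and fraternal (DZ) twin correlations for a trait."""
    return 2 * (r_mz - r_dz)

# Invented correlations for some hypothetical brain-connectivity measure:
print(round(falconer_heritability(r_mz=0.80, r_dz=0.50), 2))  # 0.6
```

The intuition: MZ twins share all their genes and DZ twins about half, so the amount by which MZ pairs are more alike than DZ pairs indexes the genetic contribution to the trait.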

In tooling up for the screening, the researchers will optimize magnetic resonance imaging (MRI) scanners to capture the brain’s anatomical wiring and its activity, both when participants are at rest and when challenged by tasks. All participants will undergo such structural and functional scans at Washington University. For these, researchers will use a customized MRI scanner with a magnetic field of 3 Tesla. This Connectome Scanner will incorporate new imaging approaches developed by consortium scientists at Minnesota and Advanced MRI Technologies and will provide ten-fold faster imaging times and better spatial resolution.

Creating these maps requires sophisticated statistical and visual informatics approaches; understanding the similarities and differences in these maps among sub-populations will improve our understanding of the human brain in health and disease.


September 19, 2010

One step closer to technologically assisted telepathy

University of Utah scientists have successfully decoded words from brain signals, bringing us one step closer to realizing technologically assisted telepathy, or techlepathy.

In an early step toward letting severely paralyzed people speak with their thoughts, researchers translated brain signals into words using two grids of 16 microelectrodes implanted beneath the skull but atop the brain:
Using the experimental microelectrodes, the scientists recorded brain signals as the patient repeatedly read each of 10 words that might be useful to a paralyzed person: yes, no, hot, cold, hungry, thirsty, hello, goodbye, more and less.

Later, they tried figuring out which brain signals represented each of the 10 words. When they compared any two brain signals - such as those generated when the man said the words "yes" and "no" - they were able to distinguish brain signals for each word 76 percent to 90 percent of the time.

When they examined all 10 brain signal patterns at once, they were able to pick out the correct word any one signal represented only 28 percent to 48 percent of the time - better than chance (which would have been 10 percent) but not good enough for a device to translate a paralyzed person's thoughts into words spoken by a computer.
The researchers discovered that each spoken word produced varying brain signals, and thus the pattern of electrodes that most accurately identified each word varied from word to word. This finding supports the theory that closely spaced microelectrodes can capture signals from single, column-shaped processing units of neurons in the brain.

All this said, the process is far from perfect. The researchers were 85% accurate when distinguishing brain signals for one word from those for another when they used signals recorded from the facial motor cortex. They were 76% accurate when using signals from Wernicke's area (and combining data didn't help). The scientists were able to record 90% accuracy when they selected the five microelectrodes on each 16-electrode grid that were most accurate in decoding brain signals from the facial motor cortex. But in the more difficult test of distinguishing brain signals for one word from signals for the other nine words, the researchers initially were accurate only 28% of the time. However, when they focused on signals from the five most accurate electrodes, they identified the correct word 48% of the time.
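The drop from pairwise to ten-way accuracy is expected, since chance level falls as the vocabulary grows. A toy nearest-centroid decoder illustrates the general setup; the templates and signal below are invented for illustration, while the actual study worked with spatial patterns across microelectrode grids:

```python
def nearest_centroid(signal, templates):
    """Decode a signal as the word whose template it is closest to (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda word: dist(signal, templates[word]))

# Invented per-word templates, e.g. mean activity on two electrodes per word:
templates = {"yes": [1.0, 0.2], "no": [0.2, 1.0], "hot": [0.9, 0.9]}

print(nearest_centroid([0.95, 0.25], templates))  # yes

# Chance level falls with vocabulary size, which is why the ten-word
# task is so much harder than the pairwise one:
print(1 / 2)   # 0.5 -- pairwise chance
print(1 / 10)  # 0.1 -- ten-word chance
```

Distinguishing two words only requires one reliable difference between signal patterns; picking one word out of ten requires that word's pattern to beat nine competitors at once.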

So, there's lots of work to be done, but the proof of concept appears to be (mostly) there.

This research is being done to help those with locked-in syndrome, but once it gets developed there will be broader implications and applications. A more sophisticated and refined version of this technology, and in conjunction with other neural interfacing technologies, could result in the development of technologically assisted telepathy.

September 17, 2010

Turchin: SETI at risk of downloading a trojan horse

Russian physicist Alexey Turchin contends that passive SETI may be just as dangerous as active SETI—if not more so:

I was fortunate enough to talk to Turchin at the Humanity+ Summit at Harvard earlier this year, where he clarified his argument to me.

Turchin worries that humanity may be tricked by an out-of-control script that is propagating throughout the Galaxy. This script, which uses pre-Singularity civilizations as its vector, fools its hosts with a lure of some kind (e.g. immortality, access to the Galactic Internet, etc.); the hosts in turn unwittingly build a device that produces a malign extraterrestrial artificial intelligence (ETAI). This ETAI then takes over all the resources of the planet so that it can re-broadcast itself into the cosmos in search of the next victim.

This concept is similar to Carl Sagan's interstellar transportation machine in Contact, except that it would work to destroy our civilization rather than see it move forward.

It's worth noting that this ETAI and its script may be a mutation of some sort, where no civilization was actually responsible for designing the damn thing. It's just a successful replicative schema that's following Darwinian principles.

It's also worth noting that I warned of this back in 2004.

Moving forward, Turchin suggests we raise awareness of the potential problem, change the guidelines for SETI research and consider the prohibition of SETI before we get our own AI.

Turchin's idea sounds ludicrous, but it's one of those crazy notions that provokes a nervous laugh. It needs to be discussed, as there may be some merit to it. We need to be careful.

Andrew Revkin: Extreme weather in a warming world

"The need for developing resilience in the face of worst-case weather is glaring and urgent. With or without shifts propelled by the buildup of human-generated greenhouse gases, as populations continue rising in some of the world’s worst climatic “hot zones” — sub-Saharan Africa being the prime example — the exposure to risks from drought and heat will continue to climb, as well. In poor places, the risk is exacerbated by persistent poverty, dysfunctional government and a glaring lack of capacity to track climate conditions and design agricultural systems and water supplies around them." -- Andrew Revkin, "Extreme weather in a warming world"

Sunspots on the decline

Scientists studying sunspots for the past two decades have concluded that the magnetic field that triggers their formation has been steadily declining. If the current trend continues, by 2016 the sun's face may become spotless and remain that way for decades—a phenomenon that in the 17th century coincided with a prolonged period of cooling on Earth.

The last solar minimum should have ended last year, but something unexpected has been happening. Although solar minimums normally last about 16 months, the current one has stretched over 26 months—the longest in a century. One reason, according to one source, may be that the magnetic field strength of sunspots appears to be waning.

Tracking and predicting solar minimums and maximums is growing in importance given the potential for devastating solar flares.

Peter Singer: Animal personhood by 2020

One of the many things I like about Peter Singer is his patient and pragmatic approach to animal rights. He realizes that major change doesn't happen overnight. So, instead of just hoping for a miraculous shift in public perception, Singer has methodically worked to see his vision for animal welfare and rights come to fruition. And in the grander scheme of things, that doesn't mean he can't dream big.

Writing in the Forbes article, 2020: Animal Advocates Surpass NRA In Political Influence, he speculates about what things might be like in ten years:
Perhaps even more significant has been the change in the legal status of animals, and the way we think about them. That movement began in Europe. Already in 2009 the European Union recognized it was wrong to think of animals merely as "property" and instead gave them a legal status that recognized them for what they are: "sentient beings." That opened the floodgates for court challenges against the confinement and mistreatment of animals.

In the United States the argument focused on our nearest relatives: chimpanzees, bonobos, gorillas and orangutans. Supported by expert evidence that great apes are self-aware, rational beings with close personal relationships and rich emotional lives, courts began to recognize them as having rights, and appointed guardians to protect those rights, in much the same way as they appoint guardians to protect the rights of people with intellectual disabilities. In 2019 the U.S. Supreme court effectively ended research on great apes by requiring consent to each experiment from a guardian concerned with the welfare of each experimental subject. The court left open for further argument the extension of this principle to other animals.
Fingers are crossed.


Google Health gets enhanced

Google Health, which was launched nearly two years ago, provides an online central repository for users to store and share their medical data with whomever they want. Needless to say, Google takes great pains to ensure privacy and security.

Google recently revved up the service, adding a number of features primarily based on user feedback. Google Health has added tools that will help users act on all their health and wellness concerns, engage in easier data tracking, increase personalization and set and track progress toward health goals.

Google has developed an easier-to-use dashboard that brings together more of a user's health and wellness information in one place and makes it easier for them to organize and act on that information. Participants can now better track wellness and wellness goals, including recording daily experiences.

For example, users might want to set a goal of walking more each day or to lower cholesterol over time. With their new design, participants can easily monitor their path to success with a visual graph that shows their progress towards their personalized goal. Users can even create custom trackers for other things that they want to monitor like daily sleep, exercise, pregnancy or even how many cups of coffee they drink a day.

Google has also integrated with several new partners to make it easier for participants to collect the data required to track their progress, including Fitbit, maker of a wearable device that captures health and wellness data such as steps taken, calories burned and sleep quality; and CardioTrainer, one of the top mobile apps for tracking fitness activity and weight loss. In the two weeks since CardioTrainer’s integration went live, CardioTrainer developer WorkSmart Labs reported that users have already uploaded more than 150,000 workouts to Google Health, where they can more easily view, track and set goals around their workouts and monitor them along with other health and wellness information.

Besides tracking progress toward health goals, the new design also gives users the ability to take notes or keep a journal on their progress for each health condition or medication they’re taking. The new design also delivers information that is more personalized to a particular set of medical conditions or specific medications. Participants can access improved content links for each medical condition, medication or lab result they have in their Google Health profile.

In addition, the Google Health profile is now easier to read and customize, with the ability to hide past items or sections that are outdated or no longer relevant. All of this helps users keep their dashboard up-to-date with current, relevant information, while still letting them maintain a complete health history.

Kenneth Minogue on how democracy erodes the moral life

Check out Kenneth Minogue's latest book, The Servile Mind: How Democracy Erodes the Moral Life. There's some interesting food for thought here:
One of the grim comedies of the twentieth century was the fate of miserable victims of communist regimes who climbed walls, swam rivers, dodged bullets, and found other desperate ways to achieve liberty in the West at the same time as intellectuals in the West sentimentally proclaimed that these very regimes were the wave of the future. A similar tragicomedy is being played out in our century: as the victims of despotism and backwardness from third world nations pour into Western states, the same ivory tower intellectuals assert that Western life is a nightmare of inequality and oppression.

In The Servile Mind: How Democracy Erodes the Moral Life, Kenneth Minogue explores the intelligentsia’s love affair with social perfection and reveals how that idealistic dream is destroying exactly what has made the inventive Western world irresistible to the peoples of foreign lands. The Servile Mind looks at how Western morality has evolved into mere “politico-moral” posturing about admired ethical causes—from solving world poverty and creating peace to curing climate change. Today, merely making the correct noises and parading one’s essential decency by having the correct opinions has become a substitute for individual moral actions.

Instead, Minogue posits, we ask that our government carry the burden of solving our social—and especially moral—problems for us. The sad and frightening irony is that the more we allow the state to determine our moral order and inner convictions, the more we need to be told how to behave and what to think.
Overstated, methinks—but his point is well taken. There's something really wrong with the issues-based politics that has come to characterize today's politicians, election campaigns and resultant administrations.

Be sure to check out this excerpt from the book.

September 14, 2010

Stross: Future Shock Now

Charlie Stross recently returned from the Australian Singularity Summit and had an epiphany: Alvin Toffler's vision of Future Shock is happening as predicted and we're caught right smack dab in the middle of it:
I don't propose to use this blog entry as a bully pulpit for bashing the intolerant religious...Rather, I'd just like to note that the past decade or so seems to have been marked by a worldwide upwelling of bigotry and intolerance. And it's not only the extremist fringes of every religious creed that are to blame here, although they're part of the picture (and no religion seems to be free of turbulent loons around the edges). We have extremist, eliminationist rhetoric in American political discourse, combined with a hair-raising outbreak of ethnophobia directed at muslims. We have France and Italy deporting Roma (illegally; they're EU citizens and have an absolute right of residence), in a move fuelled by a wave of xenophobia that bears unpleasant echoes of 1940-45. A wave of petty authoritarianism in the UK has led to the installation of all the well-oiled machinery of a police state — now in disarray due to an epochal political upset, but deeply alarming to anyone concerned for civil liberties in the past decade. Australia had its great firewall debate. Russia's government is increasingly authoritarian, harking back to the Soviet era in methods and goals (now with less revolutionary ideology).
Stross continues,
The term Future Shock was coined by Alvin and Heidi Toffler in the 1960s to describe a syndrome brought about by the experience of "too much change in too short a period of time". Per Wikipedia (my copy of Future Shock is buried in a heap of books in the room next door) "Toffler argues that society is undergoing an enormous structural change, a revolution from an industrial society to a 'super-industrial society'. This change will overwhelm people, the accelerated rate of technological and social change leaving them disconnected and suffering from 'shattering stress and disorientation' — future shocked. Toffler stated that the majority of social problems were symptoms of the future shock. In his discussion of the components of such shock, he also popularized the term information overload."

It's about forty years since "Future Shock" was published, and it seems to have withstood the test of time. More to the point, the Tofflers' predictions for how the symptoms would be manifest appear to be roughly on target. They predicted a growth of cults and religious fundamentalism; rejection of modernism; irrational authoritarianism; and widespread insecurity. They didn't nail the other great source of insecurity today, the hollowing-out of state infrastructure and externally imposed asset-stripping in the name of economic orthodoxy that Naomi Klein highlighted in The Shock Doctrine, but to the extent that Friedmanite disaster capitalism can be seen as a predatory corporate response to massive political and economic change, I'm inclined to put disaster capitalism down as being another facet of the same problem. (And it looks as if the UK and USA are finally on the receiving end of disaster capitalism at home, in the post-2008 banking crisis era.)

My working hypothesis to explain the 21st century is that the Tofflers underestimated how pervasive future shock would be. I think somewhere in the range from 15-30% of our fellow hairless primates are currently in the grip of future shock, to some degree. Symptoms include despair, anxiety, depression, disorientation, paranoia, and a desperate search for certainty in lives that are experiencing unpleasant and uninvited change. It's no surprise that anyone who can offer dogmatic absolute answers is popular, or that the paranoid style is again ascendant in American politics, or that religious certainty is more attractive to many than the nuanced complexities of scientific debate. Climate change is an exceptionally potent trigger for future shock insofar as it promises an unpleasant and unpredictable dose of upcoming instability in the years ahead; denial is an emotionally satisfying response to the threat, if not a sustainable one in the longer term.
I kinda half agree with Stross. There's a lot of truth to what he's saying. What he needs to be more specific about, however, is how different groups are reacting to future shock in different ways, and how that in turn sets off ancillary social stresses; not everyone is reacting to future shock per se.

For example, Islamic fundamentalists are clearly being set off by future shock (what others might call cultural globalization, or Westernification, or imperialism, or whatever). The reaction to their reaction, particularly by Americans, is not directly caused by future shock. Instead, it's a kind of backlash against those who are future shocked, leading to a rise in populism and an insidious quasi-fascism. But any way you look at it, there's definitely turmoil in the world, and much of it is caused by the rapid rate of technological development and spread—and the sociological changes it brings.

Be sure to read Stross's entire article; it's a good one.

Science considered

Caught some articles today about declining faith in science and how different individuals are more prone to reject or accept certain scientific discoveries:
  • Plenty of today’s scientific theories will one day be discredited. So should we be sceptical of science itself?: "There is no full-blown logical paradox here. If a claim is ambitious, people should indeed tread warily around it, even if it comes from scientists; it does not follow that they should be sceptical of the scientific method itself. But there is an awkward public-relations challenge for any champion of hard-nosed science. When scientists confront the deniers of evolution, or the devotees of homeopathic medicine, or people who believe that childhood vaccinations cause autism—all of whom are as demonstrably mistaken as anyone can be—they understandably fight shy of revealing just how riddled with error and misleading information the everyday business of science actually is. When you paint yourself as a defender of the truth, it helps to keep quiet about how often you are wrong." - Anthony Gottlieb, The Limits of Science.
  • Individuals with competing cultural values disagree about what most scientists believe: "We know from previous research that people with individualistic values, who have a strong attachment to commerce and industry, tend to be skeptical of claimed environmental risks, while people with egalitarian values, who resent economic inequality, tend to believe that commerce and industry harms the environment." - Dan Kahan, Why 'scientific consensus' fails to persuade.
  • In addition to this, it's turning out that more women than men accept climate change.

September 12, 2010

The Independent covers the Singularity Summit, transhumanism

New Independent article: Revenge of the nerds: Should we listen to futurists or are they leading us towards ‘nerdocalypse’?

The intro blurb is an eye-roller of epic proportions:
They're building robots, they're making us immortal, they're hanging out with Stevie Wonder and getting off on fruit-fly porn. These are the visionary thinkers who can make our future bright, and these are the ties that bind them. But are they leading us all towards 'nerdocalypse'?
Uh, yeah. Nerds. *sighs*

Okay, if you dare to read on,
Here, in a plush and spacious apartment not far from the Golden Gate Bridge, scientists, academics and futurists – bankrolled by the Silicon Valley dollar – are discussing what many among them believe to be an imminent and radical transformation of the human experience. This sea change, caused by monumental advances in technology, has a name: the singularity. It also has a dedicated and well-informed fanbase: the singularitarians.

The reception is in full swing. Next to the open bar, the professional rationalist is rubbing shoulders with the preeminent neurobiologist, and the scenario forecaster is exchanging ideas with the cutting-edge nanotechnologist, as notions once thought too outlandish to merit serious consideration – such as beyond-human intelligence, immortality and god-like omniscience – are reassessed in the cool light of possibility. Yesterday's tech-obsessed fantasist is today's credible expert. A new, dynamic, cross-discipline geek community is visibly taking shape, as the buzz of high-brow chatter fills the room like pipe tobacco in an early 20th-century Vienna coffee house.

Michael Vassar, summit host and president of the Singularity Institute for Artificial Intelligence (SIAI), reduces the future to two competing scenarios: "Either you and everyone you love are going to be killed by robots; or you are going to live forever." Some very clever people, he says with a hint of mischief and a disconcerting flash of clear-eyed sincerity, can make a strong case for each of those arguments, so it's in our best interests to pay attention.
Continue reading.

New 'static universe' theory challenges the Big Bang

A growing number of cosmologists are becoming increasingly dissatisfied (or is that frustrated?) with the Big Bang Theory. One such person is David F. Crawford who recently posited a static theory of the universe which he claims better explains the properties of the cosmos than the Big Bang and avoids the nagging problems of dark matter and dark energy.

From Technology Review:
The idea that the universe began in an event called the Big Bang some 13 billion years ago has a special place in science and in our society. We like the idea of a beginning.

And the evidence is persuasive. Distant galaxies all appear to be moving away from us at great speed, which is exactly what you'd expect if they were created in a Big Bang type event many billions of years ago. Such an event might also have left an echo, exactly like the one we can see as the cosmic microwave background radiation.

The Big Bang seems so elegant an explanation that we're prepared to overlook the one or two anomalies that don't quite fit, like the fact that distant galaxies aren't travelling fast enough to have moved so far since the Big Bang, a problem that inflation was invented to explain. Then there are the problems of dark matter and dark energy, which still defy explanation.

So a legitimate question, albeit an uncomfortable one, is whether there is an alternative hypothesis that also explains the observations. We looked at one here and today, David Crawford at the University of Sydney in Australia gives us another. He says all this can be explained just as well by a static universe in which spacetime is curved. He says this explains most of the major characteristics of our universe without the need for dark matter or dark energy. Neither is there any need for inflation in a static universe.
Continue reading.

Kottmeyer: Why have UFOs changed speed over the years?

There's an excellent article by Martin S. Kottmeyer in The Philosopher's Magazine titled, "Why have UFOs changed speed over the years?" In the essay, Kottmeyer argues that UFO sightings are very much a product of observation selection effects and a kind of 'sightings bias'. The nature of UFO sightings, he says, change over time and are impacted by everything from advances in technology to sci-fi movies. Kottmeyer is basically arguing that people are seeing what they expect to see, and by consequence, are working within a "search template":
Despite a considerable variety in the reports, the form of the objects was always consistent with a type of aircraft. Propellers were often seen, one witness even claiming it was larger than the rest of the plane. Jet pipes, pilot’s cockpits, glass domes, fins, legs, and antennae featured on some of the objects. Smoke, vapour trails, and rocket flames repeatedly marked their flights. A wide range of aerobatic stunts turn up among the reports: loop-the-loops, roll manoeuvres, banking, weaving, climbing, diving, tipping, circling, and swooping. Some “UFOs” buzzed cars, but unlike decades later, the car engines never died. It has been thought significant that animals sometimes reacted to the objects, yet a close reading suggests it wasn’t because of their spooky alien-ness; the saucers were doing barnstorming manoeuvres.

Notable by its absence is any indication of extraterrestrial technology: no lasers, heat rays, paralysis rays or gases, mind control rays, power rings, levitation of people or objects, denaturalisation, matter interpenetration, space-suited entities, robots, remote eyes, or even simple observation ports. Nobody was looking for aliens and nothing was seen to suggest any were there.
He continues,
Why did this shift from fast to slow take place? The simplest answer has to be the fading of memory. Arnold’s report lost its fascination as newer, better, shinier cases crowded it out for public attention. Cases like Socorro, Exeter, the Swamp Gas saucer of Dexter, Flynn, and The Interrupted Journey of Barney and Betty Hill captured people’s imaginations and became the models to which later experiences would be compared. The search template of what should be considered wondrous filtered out what seemed irrelevant. In 1947 people looked for speedy things and things that looked like discs, and ignored the slow stuff and the lights floating around at night. There was a heavy bias to misinterpreting flocks of birds.

Later, people searched for bright lights and slow, hovering objects and, as Allan Hendry showed, people had a bias towards misinterpreting stars, planets, and aeroplane lights.
He concludes:
Does this prove UFOs are unreal phantoms that blend in with their times? No. Strictly, it only proves that there is a cultural dimension in our assumptions about what constitutes the behaviour of a flying saucer. People do not report everything that is present in the sky but select only what is presumed to be interesting. What is interesting changes year to year, decade to decade, century to century. We’ve forgotten that Kenneth Arnold was interesting for reasons that no longer interest us. That, in itself, is interesting.

September 9, 2010

On the persistence of time dilation

Cool thought: Einstein's Special Theory of Relativity, which isn't so much a theory anymore for reasons I'll discuss later, tells us that an object moving at greater speed experiences a slower clock rate relative to a slower-moving one (sorry, time isn't a fixed or constant thing across the Universe, get used to it). But Einstein didn't stop there. He also went on to describe his General Theory of Relativity, in which he showed that gravity produces a similar time dilation effect; the stronger the gravity, the slower the clock.

So, as Einstein famously noted in his space-faring twin thought experiment, the returning twin, because he was moving faster relative to his Earth-bound sibling, will have aged less than his counterpart. Similarly, because of gravitational time dilation, a clock on Jupiter would run slower than a clock on Earth on account of Jupiter's much greater mass. I know it doesn't sound intuitive, but that's Relativity for you, and why Einstein is considered such a genius for figuring this out.

Now, as messed up as this sounds, this effect is becoming more perceptible to us, particularly as we travel faster and venture further into space. Time dilation has been experimentally measured using atomic clocks on airplanes, and the effect is significant enough that the Global Positioning System's satellites need to have their clocks corrected regularly. The International Space Station, because it is moving faster relative to Earth, and because it experiences slightly less gravity, is subject to both effects: a faster speed means a slower clock, but less gravity means a faster clock! NASA's mathematicians must be having a blast trying to keep their clocks in sync with the ISS's.
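To get a feel for the sizes involved, here's a back-of-the-envelope sketch of the two competing effects for a GPS satellite, using standard physical constants and approximate orbital values (the first-order weak-field formulas, not a full relativistic treatment):

```python
# Rough estimate of relativistic clock drift for a GPS satellite,
# combining the special-relativistic (velocity) and general-relativistic
# (gravitational) effects. Values are approximate textbook numbers.
import math

GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
c = 2.998e8          # speed of light, m/s
r_earth = 6.371e6    # Earth's mean radius, m
r_orbit = 2.657e7    # GPS orbital radius (~20,200 km altitude), m
seconds_per_day = 86400.0

# Orbital speed for a circular orbit: v = sqrt(GM / r)
v = math.sqrt(GM / r_orbit)  # roughly 3.9 km/s

# Velocity effect (satellite clock runs SLOW): -v^2 / (2 c^2)
velocity_shift = -(v**2) / (2 * c**2) * seconds_per_day * 1e6  # microseconds/day

# Gravitational effect (satellite clock runs FAST in weaker gravity):
# (GM / c^2) * (1/r_earth - 1/r_orbit)
gravity_shift = GM / c**2 * (1 / r_earth - 1 / r_orbit) * seconds_per_day * 1e6

net = velocity_shift + gravity_shift
print(f"velocity: {velocity_shift:+.1f} us/day")  # about -7
print(f"gravity:  {gravity_shift:+.1f} us/day")   # about +46
print(f"net:      {net:+.1f} us/day")             # about +38
```

The gravitational gain dominates, so a GPS clock would drift ahead of ground clocks by roughly 38 microseconds per day if left uncorrected, which is why the satellite clocks are deliberately offset to compensate.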

And if you think that's complicated, we also have to deal with our robots on Mars, where we need to account for the speed of Mars relative to Earth and factor in the gravitational differences between the two planets. What blows my mind is that the Mars Rover is experiencing the passage of time at a slightly different rate than what we're experiencing on Earth.

Yikes. Problems like these remind me why I dropped out of high school math.

Telomerase-activating compound may help reverse aging

Researchers have discovered a telomerase-activating compound which could eventually be used to reverse aging in humans.

Specifically, a naturally derived compound known as TA-65 has been shown to activate the telomerase gene in humans. The researchers, a collaboration of scientists from Sierra Sciences, TA Sciences, Geron Corporation, PhysioAge, and the Spanish National Cancer Research Center, discovered that activating this gene could prevent the shortening of telomeres at the ends of chromosomes, thereby slowing or even stopping the cellular aging process.
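The basic mechanism can be sketched as a toy model. Everything below is illustrative only: the numbers are hypothetical "length units," not real base-pair counts, and real telomere biology is far messier.

```python
# Toy model of replicative senescence (illustrative only). Each cell
# division shortens telomeres by `loss`; a telomerase-like activity
# restores `restore` units per division. Once telomeres fall to `floor`,
# the cell stops dividing (the Hayflick limit).
def divisions_until_senescence(start=10_000, loss=100, restore=0, floor=5_000):
    if restore >= loss:
        raise ValueError("telomeres never reach the floor: line is 'immortalized'")
    length, n = start, 0
    while length > floor:
        length -= loss - restore
        n += 1
    return n

print(divisions_until_senescence())             # no telomerase activity
print(divisions_until_senescence(restore=50))   # partial activation: more divisions
```

In this cartoon, partially activating telomerase halves the net loss per division and doubles the cell line's replicative lifespan, which is the qualitative claim being made for TA-65.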

While TA-65 is probably too weak to completely arrest the aging process, it is the first telomerase activator recognized as safe for human use.

"We are on the cusp of curing aging," said William Andrews, Ph.D., co-author of this study and President and CEO of Sierra Sciences, LLC. "TA-65 is going to go down in history as the first supplement you can take that doesn't merely extend your life a few years by improving your health, but actually affects the underlying mechanisms of aging. Better telomerase inducers will be developed in the coming years, but TA-65 is the first of a whole new family of telomerase-activating therapies that could eventually keep us young and healthy forever."

As excited as I am by this discovery, I believe Andrews's statement is considerably overstated. We are still quite a ways off from having interventions that will "keep us young and healthy forever," and it is unlikely to be accomplished through the exclusive use of telomerase-activating therapies. Aging is a multi-faceted process that will inevitably require a cocktail of therapies. Moreover, as healthy life span is continually extended, new and unanticipated age-related diseases will crop up.

It's worth noting that, in addition to slowing the cellular aging process, the researchers hope that TA-65 may also help treat diseases which attack the immune system such as HIV/AIDS.

Press release.

September 6, 2010

NASA's warnings on the dangers of severe space storms

Back in June I blogged about the potential dangers arising from space storms that could spawn devastating solar flares. This is no joke, nor is it part of the laughable (but conveniently coincidental) 2012 doomsday nonsense. There's actual science involved here; NASA issued a solar storm warning back in 2006 in which it predicted that the worst of it could come sometime between 2011 and 2012. Last year they slightly downgraded their warning while extending their forecast to 2013—May 2013 to be exact, which sounds eerily specific.

According to NASA, we are currently entering a solar maximum period. These cycles are capable of creating space storms—what are known as "Carrington Events," named after astronomer Richard Carrington, who witnessed a particularly nasty solar flare back in 1859. The flare he documented resulted in electrified transmission cables, fires in telegraph offices, and Northern Lights so bright that people could read newspapers by their red and green glow.

If this is what happened in 1859, imagine what would happen today. Well, we're starting to have some idea—and the news is pretty bad.

A recent report by the National Academy of Sciences found that if a similar storm occurred today, it could cause $1 trillion to $2 trillion in damage to society's high-tech infrastructure and require four to ten years for complete recovery. It could damage everything from emergency services’ systems, hospital equipment, banking systems and air traffic control devices through to everyday items such as home computers, iPods and GPS units. Because of our heavy reliance on electronic devices, which are sensitive to magnetic energy, the storm could leave a multi-billion-dollar damage bill and cataclysmic-scale problems for governments.

Worse than this, however, would be the potential length of blackouts. According to a Metatech Corporation study, an event like the 1921 geomagnetic storm would result in large-scale blackouts affecting more than 130 million people and would expose more than 350 transformers to the risk of permanent damage. It could take months—if not years—to put everybody back on the grid.

For more reading, I recommend the NASA report, "Severe Space Weather Events--Understanding Societal and Economic Impacts: A Workshop Report" (2008). Excerpt:
Modern society depends heavily on a variety of technologies that are susceptible to the extremes of space weather—severe disturbances of the upper atmosphere and of the near-Earth space environment that are driven by the magnetic activity of the Sun. Strong auroral currents can disrupt and damage modern electric power grids and may contribute to the corrosion of oil and gas pipelines. Magnetic storm-driven ionospheric density disturbances interfere with high-frequency (HF) radio communications and navigation signals from Global Positioning System (GPS) satellites, while polar cap absorption (PCA) events can degrade—and, during severe events, completely black out—HF communications along transpolar aviation routes, requiring aircraft flying these routes to be diverted to lower latitudes. Exposure of spacecraft to energetic particles during solar energetic particle events and radiation belt enhancements can cause temporary operational anomalies, damage critical electronics, degrade solar arrays, and blind optical systems such as imagers and star trackers.

The effects of space weather on modern technological systems are well documented in both the technical literature and popular accounts. Most often cited perhaps is the collapse within 90 seconds of northeastern Canada’s Hydro-Quebec power grid during the great geomagnetic storm of March 1989, which left millions of people without electricity for up to 9 hours. This event exemplifies the dramatic impact that extreme space weather can have on a technology upon which modern society in all of its manifold and interconnected activities and functions critically depends.

Nearly two decades have passed since the March 1989 event. During that time, awareness of the risks of extreme space weather has increased among the affected industries, mitigation strategies have been developed, new sources of data have become available (e.g., the upstream solar wind measurements from the Advanced Composition Explorer), new models of the space environment have been created, and a national space weather infrastructure has evolved to provide data, alerts, and forecasts to an increasing number of users.

Now, 20 years later and approaching a new interval of increased solar activity, how well equipped are we to manage the effects of space weather? Have recent technological developments made our critical technologies more or less vulnerable? How well do we understand the broader societal and economic impacts of extreme space weather events? Are our institutions prepared to cope with the effects of a “space weather Katrina,” a rare, but according to the historical record, not inconceivable eventuality?
Read more.

Hitchens: Domesticating religion an unceasing chore of civilization

Writing in Slate, Christopher Hitchens says the taming and domestication of religious faith is one of the unceasing chores of civilization. In the article, titled "Free Exercise of Religion? No, Thanks.", Hitchens asks himself: Am I in favor of the untrammeled "free exercise of religion"? Not surprisingly, his answer is no.

He takes a number of religions to task for what he sees as moral inconsistencies and hypocrisies, everything from Mormonism through to Roman Catholicism. He writes:
The Church of Scientology, the Unification Church of Sun Myung Moon, and the Ku Klux Klan are all faith-based organizations and are all entitled to the protections of the First Amendment. But they are also all subject to a complex of statutes governing tax-exemption, fraud, racism, and violence, to the point where "free exercise" in the third case has—by means of federal law enforcement and stern public disapproval—been reduced to a vestige of its former self.
And concludes:
Reactions from even "moderate" Muslims to criticism are not uniformly reassuring. "Some of what people are saying in this mosque controversy is very similar to what German media was saying about Jews in the 1920s and 1930s," Imam Abdullah Antepli, Muslim chaplain at Duke University, told the New York Times. Yes, we all recall the Jewish suicide bombers of that period, as we recall the Jewish yells for holy war, the Jewish demands for the veiling of women and the stoning of homosexuals, and the Jewish burning of newspapers that published cartoons they did not like. What is needed from the supporters of this very confident faith is more self-criticism and less self-pity and self-righteousness.

Those who wish that there would be no mosques in America have already lost the argument: Globalization, no less than the promise of American liberty, mandates that the United States will have a Muslim population of some size. The only question, then, is what kind, or rather kinds, of Islam it will follow. There's an excellent chance of a healthy pluralist outcome, but it's very unlikely that this can happen unless, as with their predecessors on these shores, Muslims are compelled to abandon certain presumptions that are exclusive to themselves. The taming and domestication of religion is one of the unceasing chores of civilization. Those who pretend that we can skip this stage in the present case are deluding themselves and asking for trouble not just in the future but in the immediate present.
Read more.

Novae produce gamma-rays. Damn.

Bad news: Novae emit gamma-rays.

We've known for a long time that supernovae produce gamma-rays, but until now it was assumed that novae lacked the power to emit such high-energy radiation. This is bad news because novae occur far more frequently than supernovae and hypernovae; we are therefore at a much greater risk of being wiped out by a blast of gamma radiation than previously thought.

The Milky Way experiences about 30 to 60 novae per year, with a likely rate of about 40. Roughly 25 novae brighter than about magnitude 20 are discovered in the Andromeda Galaxy each year, and smaller numbers are seen in other nearby galaxies.

Contrast that with supernovae, which occur about five times every hundred years.
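Taking the figures above at face value, the frequency gap is stark. A quick back-of-the-envelope check (illustrative numbers only, not precise astronomy):

```python
# Rough comparison of Galactic event rates, using the figures quoted above.
novae_per_year = 40            # likely Milky Way nova rate
supernovae_per_year = 5 / 100  # ~5 supernovae per century

ratio = novae_per_year / supernovae_per_year
print(f"Novae occur roughly {ratio:.0f}x more often than supernovae")
# → Novae occur roughly 800x more often than supernovae
```

On these numbers, novae outpace supernovae by nearly three orders of magnitude, which is why their newly discovered gamma-ray output matters.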

A nova should not be confused with a supernova. A nova is a cataclysmic nuclear explosion caused by the accretion of hydrogen onto the surface of a white dwarf star, where it ignites and triggers runaway nuclear fusion. A supernova, on the other hand, is a stellar explosion far more energetic than a nova. Supernovae are extremely luminous and cause a burst of radiation that can outshine an entire galaxy before fading from view over several weeks or months. During this short interval a supernova can radiate as much energy as the Sun is expected to emit over its entire life span. The explosion expels much or all of a star's material at a velocity of up to 30,000 km/s (10% of the speed of light), driving a shock wave into the surrounding interstellar medium.

Though not as powerful as supernovae, novae are still immensely energetic, each emitting the equivalent of about 1,000 times the energy our Sun emits in a year. And now we can add gamma-rays to their list of nasty excretions.

To say that a gamma-ray blast would be bad for us here on Earth would be a gross understatement. Combined with the effects of the cataclysmic stellar explosion itself, it is one of the most destructive forces in the Universe, able to sterilize massive swaths of the galaxy. Supernovae can shoot out directed beams of gamma-rays to a distance of 100 light years, while hypernovae can deliver gamma-ray bursts as far as 500 to 1,000 light years away.

As for novae, the explosion creates a hot, dense, expanding shell called a shock front, composed of high-speed particles, ionized gas and magnetic fields. These shock waves expand at 7 million miles per hour—or nearly 1% the speed of light. The magnetic fields trap particles within the shell and whip them up to tremendous energies. Before they can escape, the particles reach velocities near the speed of light. Scientists say the gamma rays likely result when these accelerated particles smash into the red giant's wind.
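The "nearly 1% the speed of light" figure checks out arithmetically; a quick sanity check of the unit conversion:

```python
# Convert the quoted shock speed (7 million mph) to a fraction of c.
mph_to_ms = 1609.344 / 3600        # metres per second in one mile per hour
shock_speed_ms = 7e6 * mph_to_ms   # ~3.13 million m/s
c = 299_792_458                    # speed of light in m/s

fraction = shock_speed_ms / c
print(f"{fraction:.2%} of the speed of light")  # → 1.04% of the speed of light
```

So the shock front itself moves at about one-hundredth of light speed; it's the individual particles trapped inside it that get accelerated to near-light velocities.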

Prior to this discovery, it was known that the remnants of much more powerful supernova explosions can trap and accelerate particles this way, but no one suspected that the magnetic fields in novae were strong enough to do it as well. Supernova remnants endure for 100,000 years and produce radiation that affects regions of space thousands of light-years across.

These explosions produce highly collimated beams of hard gamma-rays extending outward from the nova or supernova. Any unfortunate life-bearing planet caught in one of those beams would suffer a mass extinction (if not total extinction, depending on its proximity to the event). Gamma-rays would strip away the ozone layer and, by generating large quantities of NO2 in the atmosphere, could indirectly trigger the onset of an ice age.

Life on Earth just got that much more tenuous.