July 6, 2013

7 Totally Unexpected Outcomes That Could Follow the Singularity

By definition, the Technological Singularity is a blind spot in our predictive thinking. Futurists have a hard time imagining what life will be like after we create greater-than-human artificial intelligences. Here are seven outcomes of the Singularity that nobody thinks about — and which could leave us completely blindsided.
Top image: Ridwan Chandra.
For the purpose of this list, I decided to maintain a very loose definition of the Technological Singularity. My own personal preference is that of an intelligence explosion and the onset of multiple (and potentially competing) streams of both artificial superintelligence (SAI) and weak AI. But the Singularity could also result in a kind of Kurzweilian future in which humanity has merged with machines. Or a Moravecian world in which our “mind children” have left the cradle to explore the cosmos, or a Hansonian society of competing uploads, featuring rapid economic and technological growth.
In addition to some of these scenarios, a Singularity could result in a complete existential shift for human civilization, like our conversion to digital life, or the rise of a world free from scarcity and suffering. Or it could result in a total disaster and a global apocalypse. Hugo de Garis has talked about a global struggle for power involving massively intelligent machines set against humanity — the so-called artilect war.
But there are some lesser known scenarios that are also worth keeping in mind, lest we be caught unawares. Here are seven of the most unexpected outcomes of the Singularity.

1. AI Wireheads

It’s generally assumed that a self-improving artificial superintelligence (SAI) will strive to become progressively smarter. But what if cognitive enhancement is not the goal? What if an AI just wants to have fun? Some futurists and scifi writers have speculated that future humans will engage in the practice of wireheading — the artificial stimulation of the brain to experience pleasure (check out Larry Niven’s Known Space stories for some good examples). An AI might conclude, for example, that optimizing its capacity to experience pleasure is the most purposeful and worthwhile thing it could do. And indeed, evolution guides the behavior of animals in a similar fashion. Perhaps a transcending, self-modifying AI will not be immune to similar tendencies.
At the same time, an SAI could also interpret its utility function in such a way that it decides to wirehead the entire human population. It might do this, for example, if it was pre-programmed to be “safe” and to consider the best interests of humans, thus taking its injunction to an extreme. Indeed, an AI could get its value system completely botched up, concluding that maximal pleasure is the highest possible utility for itself and for humans.
As an aside, futurist Stephen Omohundro disagrees with the AI wirehead prediction, arguing that AIs will work hard to avoid becoming wireheads because it would be harmful to their goals. Image: Mondolithic Studios.

2. “So long and thanks for all the virtual fish”

Imagine this scenario: The Technological Singularity happens — and the emerging SAI simply packs up and leaves. It could just launch itself into space and disappear forever.
But in order for this scenario to make any sense, an SAI would have to conclude, for whatever reason, that interacting with human civilization is simply not worth the trouble; it's just time to leave Earth — Douglas Adams' dolphin-style.
Image: Colie Wertz.

3. The Rise of an Invisible Singleton

It’s conceivable that a sufficiently advanced AI (or a transcending mind upload) could set itself up as a singleton — a hypothetical world order in which there is a single decision-making agency (or entity) at the highest level of control. But rather than make itself and its global monopoly obvious, this god-like AI could covertly exert control over the human population.
To do so, an SAI singleton would use surveillance (including reliable lie detection) and mind-control technologies, communication technologies, and other forms of artificial intelligence. Ultimately, it would work to prevent any threats to its own existence and supremacy, while exerting control over the most important parts of its territory, or domain — all the while remaining invisible in the background.

4. Our Very Own Butlerian Jihad

Another possibility is that humanity might actually defeat an artificial superintelligence — a totally unexpected outcome based on the sheer improbability of it. No doubt, once a malign or misguided SAI (or even a weak AI) gets out of control, it will be very difficult, if not impossible, to stop. But humanity, perhaps in conjunction with a friendly AI, or by some other means, could fight back and find a way to beat it down before it can impose its will on the planet and human affairs. Alternately, future humans could work to prevent it from coming about in the first place.
Frank Herbert addressed these possibilities in the Dune series by virtue of the “Butlerian Jihad” — a cataclysmic event in which the “god of machine logic” was overthrown by humanity and a new fundamental tenet invoked: “Thou shalt not make a machine in the likeness of a human mind.” The Jihad resulted in the destruction of all intelligent machines and the rise of a new feudal society. It also resulted in the rise of the mentat order — humans with extraordinary cognitive abilities who functioned as virtual computers.

5. First Contact

Our transition to a post-Singularity civilization could also expose us to a larger, technologically advanced intergalactic community. There are a number of different possibilities here — and not all of them good.
First, a post-Singularity civilization (or SAI) might quickly figure out how to communicate with extraterrestrials (either by receiving or transmitting). There may be a kind of cosmic internet that we’re oblivious to, but which only advanced civs might be able to detect (e.g. some kind of quantum communication scheme involving non-locality). Second, a kind of Prime Directive may be in effect — a galactic policy of non-interference in which ‘primitive’ civilizations are left alone. But instead of waiting for us to develop faster-than-light travel, an extraterrestrial civilization might be waiting for us to achieve and survive a Technological Singularity.
Thirdly, and related to the last point, an alien civilization might also be waiting for us to reach the Singularity, at which time it will conduct a risk assessment to determine if our emerging SAI or post-Singularity civilization poses some kind of threat. If it doesn’t like what it sees, it could destroy us in an instant. Or it might just destroy us anyway, in an effort to enforce its galactic monopoly. This might actually be how berserker probes work: they sit idle somewhere in the solar system, becoming active at the first sign of a pending Singularity.

6. Our Simulation Gets Shut Down

If we’re living in a giant computer simulation, it’s possible that we’re living in a so-called ancestor simulation — a simulation that’s being run by posthumans for some particular reason. It could be for entertainment, or for a science experiment. An ancestor simulation could also be run in tandem with many other simulations in order to create a large sample pool, or to allow for the introduction of different variables. Disturbingly, it’s possible that the simulations are only designed to reach a certain point in history — and that point could very well be the Singularity.
So if we reach that stage, everything could suddenly go dark. What’s more, the computational demands required to run a post-Singularity simulation of a civilization could be enormous. The clock rate, or even rendering time, could slow the simulation down to the point where the posthumans would no longer have any practical use for it. They’d probably just shut it down.

7. The AI Starts to Hack Into the Universe

Admittedly, this one’s pretty speculative (not that the other ones haven’t been!) — but think of it as a kind of ‘we don’t know what we don’t know’ sort of thing. A sufficiently advanced SAI could start to see directly into the fabric of the cosmos and figure out how to hack into its ‘code.’ It could start to mess around with the universe to further its needs, perhaps by making subtle alterations to the laws of the universe itself, or by finding (or engineering) an ‘escape hatch’ in order to avoid the inevitable onslaught of entropy. Alternately, an SAI could construct a basement universe — a small artificially created universe linked to the current universe by a wormhole. This could then be used for living space, computing, or as a way to escape the eventual heat death of the parent universe.
Or, an SAI could migrate and disappear into an exceedingly small living space (what the futurist John Smart refers to as STEM space — highly compressed areas of space, time, energy, and matter) and conduct its business there. In such a scenario, an advanced AI would remain utterly indifferent to us puny meatbags; to an SAI, the idea of conversing with humans might be akin to us wanting to have a conversation with a plant.
This article originally appeared at io9

Will Old People Take Over the World?

One of the consequences of radical life extension is the potential for a gerontocracy to set in — the entrenchment of a senior elite who will hold on to their power and wealth, while dominating politics, finance, and academia. Some critics worry that society will start to stagnate as the younger generations become increasingly frustrated and marginalized. But while these concerns need to be considered, a future filled with undying seniors will not be as bad as some might think, and here’s why.
Indeed, the human lifespan is set to get progressively longer. And it’s more than just extending life — it’s about extending healthy life. A common misconception amongst the critics is that we’re setting ourselves up for, as political scientist Francis Fukuyama put it, a “nursing home world” filled with decrepit old folk who are leeching off society’s resources.

A Genuine Possibility?

But nothing could be further from the truth. If we assume that the aging process can be dramatically slowed down, or even halted, it’s more than likely that the older generations will continue to serve as vibrant and active members of society. And given that seniors tend to hold positions of power and influence, it’s conceivable that they’ll refuse to be forced into retirement on the grounds that such an imposition would violate their human rights (and they’d be correct in that assessment).
In turn, seniors will continue to lead their corporations as CEOs and CFOs. They’ll hold onto their wealth and political seats, kept in power by highly sympathetic and demographically significant elderly populations. And they’ll occupy positions of influence at universities and other institutions.
And we have the precedents to prove it. Politicians, including senators and various committee members, do a good job holding on to power and influence in their legislatures. U.S. judges can serve for life. Non-democratic countries are particularly notorious for setting up gerontocracies, the most notable example being the Soviet Union during and after the Brezhnev era. And religious institutions, like the Roman Catholic Church, are especially sympathetic to senior leaders.
It’s also a prospect that’s been covered extensively in scifi, including Bruce Sterling’s Holy Fire, in which gerontocrats wield almost all capital and political power, while the younger populations live as outsiders. Frederik Pohl’s Search the Sky features a gerontocracy masquerading as a democracy. It's a theme that was also addressed in the 1967 novel Logan's Run, written by William F. Nolan and George Clayton Johnson. In this story, an ageist society, in order to thwart elderly influence and a drain on valuable resources, executes everyone over the age of 21.

The Concerns

Indeed, much of the worry has to do with social inequality and the marginalization of the younger generations. Already today, graduates have a hard time finding jobs and “breaking in” to the corporate world. Life and health extension could reduce job turnover even further. Feelings of inter-generational resentment and angst could start to creep in.
Another fear is that society could start to stagnate and become risk-averse. The common charge is that seniors are, by their nature, conservative and “set in their ways.” Social and cultural progress, like marriage reform, could come to a grinding halt.
Similarly, there’s concern that gerontocracies could hold academia back. It may become increasingly difficult for radical and unconventional scientific concepts to gain acceptance. As the quantum physicist Max Planck famously said, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

Adapting to Extended Lives

But not everyone’s convinced this is going to be a problem. One such voice belongs to the sociologist and futurist James Hughes, who works at Trinity College in Connecticut. I asked him if a gerontocracy is something we genuinely need to be concerned about.
“There are so many more important forms of unequal power in society that it is hard not to see hand-wringing about gerontocracy as an attempt to distract from corporate malfeasance, patriarchy, white skin privilege, lookism, and so on,” he told io9. “But yes, gerontocracy is one form of power, and there are some ways that our democratic society has ensured health insurance and income stability for seniors that it hasn't done for working adults simply because seniors are more likely to vote. Is that gerontocracy, though, or the way a democratic society works? People who can't get themselves organized to demand and defend services get less of them.”
But this inequality, says Hughes, will quickly be erased. His suspicion is that, as we adapt to radical life and health extension, one of the fights set to emerge will be over raising and eventually eliminating the retirement age. It’s a fight, he says, that will make the worries about a gerontocracy seem quaint.
“Optimally, however, this struggle will not just sharpen generational antagonism, as portrayed in Christopher Buckley's novel Boomsday,” he says, “but lead to a more equitable and universal system of income support and social services not based on age.”
So I asked Hughes how society could be hurt if an undying generation refuses to relinquish their hold on power and capital.
“Again, the question should be, how is society hurt when small unaccountable elites control the vast majority of wealth?” he responded. The age of the super-wealthy is pretty immaterial, he says, especially when most of the people in their age bracket will be as poor and powerless as younger cohorts.
“If the wealthy avail themselves of longevity treatments and cognitive enhancements that the hoi polloi can't afford, and thereby start a feedback loop of privilege — ability and longevity that threatens to create a super-aristocratic master race — then the demand for making those therapies available to everyone will become politically irresistible,” he says. “It’s not that it will happen painlessly, but the democratization of the wealth and longevity technologies of elites is more or less inevitable.”

Simple-minded Futurism

Hughes also doesn’t buy into the argument that radical life extension will result in the stagnation of society. If anything, he thinks these claims, such as risk-aversion and inflexibility, smack of ageism and simple-minded futurism.
“Gerontology has dispelled the notion that people become any more conservative as they age,” he told me. “They do maintain many of the tastes and beliefs of their youth, and since older cohorts in the last century were always less educated than the younger cohorts, they tended to have less of the cosmopolitanism and liberal outlook of younger cohorts.” But the dramatic evolution of older cohorts' views on issues like minority, women's and gay rights, says Hughes, show that age is no barrier to changing your mind on deeply held values.
And as to the “abysmal futurism of the geronto-phobes” (he's thinking of Francis Fukuyama and Leon Kass in particular), the principal thing, argues Hughes, is that they’re overlooking the ways in which scientists are figuring out how to boost the body's natural production of stem cells in order to repair the damage caused by disease.
“Seniors' brains continue to make stem cells,” says Hughes, “and when we are able to boost neural stem cell generation in order to forestall the neurodegeneration of aging, older people will become as cognitively flexible as younger people.” Hughes points to Sterling's Holy Fire as a prime example of this possibility.
Ultimately, says Hughes, what the growing literature on aging, emotions and violence does suggest is that an older world will be more serene and far less violent.
“Younger people experience more swings of positive and negative emotions, and young men are responsible for the bulk of violence and crime. Older people are more satisfied with their lives and have more of an even keel.”
In a world awash with technologies of mass destruction, says Hughes, a strong dose of senior wisdom may be precisely what we need.
This article originally appeared at io9

June 29, 2013

These Unresolved Ethical Questions Are About to Get Real

As our technologies take us from the theoretical to the practical, a number of thorny moral quandaries remain unresolved. Here are the important ethical questions that are on the verge of becoming highly relevant.

Should people be allowed to clone themselves?

There’s currently a global moratorium on human cloning. But you just know that’s not going to last. Back in 2005, Korean researcher Hwang Woo-suk faked a human cloning breakthrough, and it’ll only be a matter of time before some renegade scientist actually does it. This year has already seen two major advancements in this area, including the use of cloning to create embryonic stem cells and a new technique by which mammalian cloning lines can be extended and reproduced indefinitely.
Many people consider the act of human cloning to be an affront to our dignity and individuality. It’s also seen by some as an incredibly selfish and egotistical act. Others worry about the potential for clones to be exploited or abused. On the flip-side of the debate, supporters say there’s no harm done so long as the rights of clones are recognized. A common argument in support is that clones are essentially delayed twins. And yet others say it’s a perfectly legitimate way to create biological offspring — that it’s a novel form of assisted human reproduction that could help same-sex or infertile couples reproduce.

Is it okay to introduce non-human DNA into our genome?

This branch of science is called transgenics — the intermingling of human and non-human genetic information. Scientists endow lab animals with bits of human DNA all the time, but the opposite most assuredly doesn’t happen. And in fact, it’s illegal virtually everywhere. Some worry about the creation of chimeras — creatures that are part-human and part-something-else. Supporters say that it could result in novel therapies. It’s possible, for example, that a non-human animal has a natural immunity to a disease. Wouldn’t we want to endow ourselves with this same immunity? More radically and speculatively, it’s also possible that more substantive animal characteristics could be introduced into humans (bird vision, dog hearing, dolphin fins, etc.). If so, what’s the harm? Would we diminish what it means to be human?

Should parents be allowed to design their babies?

Should we allow a Gattaca-like world to come into existence? Like human cloning, the idea of genetically modifying our offspring still falls within the realms of illegality and taboo. Its supporters call it human trait selection; its opponents derogatively refer to it as designer babies. Either way, it would allow parents to select the characteristics of their progeny, including non-medical attributes like hair and eye color, height, intelligence, greater empathy, sexual orientation, personality type, and basically any other genetically influenced trait.
Its detractors complain that it’s simply a way for parents to control the destiny of their offspring. They also worry that an arms race could occur, where parents will feel compelled to modify their offspring as a way to keep up with the Joneses’ baby. Some are concerned about the potential for abuse — like parents giving their children superfluous physical characteristics (such as extreme height, or even silly things like a tail).
Supporters, on the other hand, say it’s a form of reproductive autonomy, and that well-informed and well-intentioned parents — in conjunction with the laws and their fertility doctor — are well within their rights. Others argue that human trait selection is inherently good, and that parents are simply looking to maximize their child’s potential.

What are the most important areas of scientific research?

Our civilization is currently facing a number of grave challenges — everything from superstorms through to epidemics and the rise of apocalyptic threats. So, when it comes to the funding of important scientific research, what makes the most sense?
Image via China.org.cn.
Given the looming threat of global warming, some would say that we should invest in climate science and various geoengineering schemes. There’s also the threat of a global pandemic, like the avian flu. Shouldn’t that be our greatest concern? Or what about the potential for powerful technologies to serve as game-changers — things that could actually fix our planet? It's reasonable to argue that we should invest in additive manufacturing techniques (like 3D printing), molecular nanotechnology — and even artificial intelligence. Which brings up another important area: research into mitigating existential risks.

Should people be forced to die once indefinite lifespans are achieved?

The day will eventually come when the problem that is biological aging is finally solved. Needless to say, the advent of indefinite lifespans could result in some serious negative consequences, including overpopulation, the rise of a gerontocracy, widespread boredom and restlessness, and a de-valuing of life. And in fact, in consideration of these possibilities, political scientist Francis Fukuyama — back when he was serving on George W. Bush’s bioethics council — said that governments have the right to tell their citizens that they have to die. It would be a kind of Logan’s Run world.
Such a turn of events would be highly problematic, to say the least, and a complete affront to our civil rights (i.e. the right to medical treatment, the right to life, the right to self-determination etc.). So how are we going to deal with the prospect of indefinite lifespans once they start to emerge? And what about the right to end one’s life?

Should we have guaranteed universal income?

Within a few decades, the global economy could face a collapse the likes of which we've never seen. As robots replace manual workers, and as thought workers start to get replaced by artificial intelligence, unemployment rates could reach staggering levels, and wealth could become concentrated in the hands of a very few. It would be a disruption similar to the one caused by the Great Depression — an economic and social catastrophe that ushered in the modern welfare state. Should this second Great Depression occur, there could be calls for a guaranteed universal income — a social policy that ensures everyone gets a steady paycheck so that basic needs are met. Of course, not everyone will be thrilled with this idea; a population dependent on the government — or more accurately, on the forced distribution of wealth — certainly rubs conservative elements the wrong way.

Which animals have moral value?

Last year, an international group of scientists signed the Cambridge Declaration on Consciousness, in which they proclaimed their support for the idea that many animals are conscious and aware to the degree that humans are — a list that includes all mammals, birds, and even the octopus. As we’re also learning, insects exhibit some remarkable cognitive capacities. A question is starting to emerge about the moral relevance of these animals, and whether or not we should take more care in ensuring their well-being. To what extent should we work to reduce suffering in the world?
Needless to say, not everyone is onboard with these ideas. It’s largely taken for granted, owing to our position of privilege, that we can exploit animals and use them as we see fit, whether it be for meat, our entertainment, or for medical testing purposes.

Can only humans be persons?

Further, there’s also the issue of non-human animal personhood — the notion that some animals, owing to complex cognitive and emotional attributes, deserve the same sorts of legal protections afforded to all humans. Specifically, these animals would include all great apes, cetaceans (dolphins and whales), and elephants. Looking further ahead, there’s even the potential for artificial intelligence to have not just moral value, but personhood designation itself.

Many would argue that only humans can be persons. This is the basic tenet of human exceptionalism — the idea that humans should always occupy an exalted place atop the food chain, and that there’s something inherently and intangibly special about Homo sapiens.

Should we biologically enhance non-human animals?

Somewhat related to the last point, there’s also the potential for animal uplift. Just last year, scientists demonstrated that a brain implant can improve thinking ability in primates. In short order, and as a consequence of testing human augmentation technologies on animals, we will have it within our means to significantly enhance their cognitive capacities as well. As I’ve argued in the past, we may actually be morally obligated to do this as we bring the entire biosphere into a post-biological, post-Darwinian existence. But others decry this as a form of human imperialism, and as a way to impose human characteristics on animals. Some would simply say that we shouldn’t mess with nature and that it’s none of our business to modify animals in this way.

Do people living in the present have more value than future persons?

This is a classic question that has baffled moral theorists for years, and it’s one that could soon become quite topical. If we’re to deal with climate change and prevent the exhaustion of our planet’s non-renewable resources, we may be forced to scale back our civilization to ensure ongoing sustainability. Otherwise, future generations will have to reap what we sow. The answer, some would say, is to pull back and live simpler lives. But should people living in the here-and-now have to worry and make sacrifices for people who haven’t even been born yet? And what if things turn out better in the future? Would it all have been worth it?
This article originally appeared at io9.
Top image via Splice

June 27, 2013

This is what it’s like to shake hands with the future

Meet Nigel Ackland, the recipient of the Bebionic 3 artificial hand — the world’s most advanced cybernetic limb. He was just one of over 30 remarkable scientists, technologists, and futurists who spoke at the recently concluded Global Future 2045 Congress held in New York.
I recently returned from GF2045 after an intense and jam-packed weekend in NYC. The congress, which was organized by the 32-year-old Russian entrepreneur and futurist Dmitry Itskov, was an extension of his 2045 Initiative — an attempt to upload a fully conscious mind to an avatar by mid-century.
To explore and promote this possibility, Itskov assembled a diverse and eclectic set of thinkers that included futurist Ray Kurzweil, biologist George Church, mind theorist Marvin Minsky (via pre-recorded video as he is currently ill), optogenetics neuroengineer Ed Boyden, X-Prize founder Peter Diamandis, and many, many others.
But the big hit of the conference had to be Nigel Ackland, a 53-year-old former precious metals worker from Royston, Cambridgeshire, who lost his right hand when it became caught in an industrial blending machine at a smelting plant in 2006. The severity of his injury left him with a flared stump and difficulty finding a suitable artificial limb. Initially, the best that doctors could do for him was a replacement hook.
But in May of last year, Leeds-based prosthetics company RSLSteeper asked Ackland if he would trial its latest artificial hand — the most high-tech available in the world.
Speaking on stage for the very first time, Ackland demo’d the sleek, black device to a rapt audience. He operated the limb by sending the same brain signals he used to move his original arm. Sensors on the artificial limb can pick up these signals and trigger one of 14 pre-programmed movements, including grips, wrist movements — and even individual finger movements (like a forefinger pincer motion).
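As an aside, here’s a toy sketch of how myoelectric control schemes like this generally work: signals from electrodes on the residual limb are thresholded and mapped onto a library of pre-programmed grip patterns. Everything in it (the channel names, the 0.6 threshold, the four-grip subset) is illustrative; this is not RSLSteeper’s actual control logic.

```python
# Toy model of myoelectric prosthesis control. Two normalized muscle-signal
# channels (0..1) are thresholded and mapped to pre-programmed grips.
# All names and values here are illustrative, not bebionic hardware specs.
GRIPS = ["relaxed", "power grip", "pincer", "wrist rotate"]  # subset of 14

def select_grip(emg_open: float, emg_close: float, threshold: float = 0.6) -> str:
    """Map a pair of muscle-signal amplitudes to a grip command."""
    if emg_open > threshold and emg_close > threshold:
        return "wrist rotate"  # co-contraction often acts as a mode switch
    if emg_close > threshold:
        return "power grip"
    if emg_open > threshold:
        return "pincer"
    return "relaxed"

print(select_grip(emg_open=0.1, emg_close=0.8))  # -> power grip
```

A real controller filters and classifies continuous EMG streams rather than thresholding single samples, but the basic idea of mapping a few muscle signals onto a fixed grip library is the same.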
The crowd gasped when he rotated the wrist a full 360 degrees.
After his talk, and for the duration of the congress, Ackland was swarmed by attendees. At one point there was literally a line-up of people waiting to have their picture taken with him. Cyborgs, it would appear, have very quickly become our heroes.
When I finally managed to meet Ackland, I asked him what he thought about all the attention.
“I absolutely love it,” he told me. “And it certainly beats the alternative — being completely ignored.”
He was referring to the previous six years — a challenging period of time before the artificial hand when very few people approached him on account of his hook. The artificial hand, it would seem, has certainly changed his fortunes.
“Having a bionic hand makes me feel like a human again,” he told me.
This article originally appeared at io9.

June 1, 2013

Are We Screwing Ourselves By Transmitting Radio Signals Into Space?

For nearly a hundred years, Earth has sent radio signals into space. If anyone nearby is listening, they probably know we’re here. In light of this, a new paper assesses the potential danger presented by such signals, concluding that the benefits outweigh the risks. But how can we really know? 
Top image: Scene from Battleship (2012), a film in which an alien civilization discovers Earth by detecting its radio emissions. 

Leaky Earth?

We’ve been shouting out into the cosmos for quite some time now. Electromagnetic waves of various intensities and frequencies have been streaming away from Earth for nearly a century, the remnants of TV broadcasts, mobile phone conversations, satellite transmissions, and military, civil and astronomical radars.
We’ve even deliberately tried to get ET’s attention — a controversial practice known as METI (Messaging to Extraterrestrial Intelligences). There have been many such attempts, including the 2001 Teen-Age Message to the Stars organized by the Russian radio engineer Alexander Zaitsev. His work, and that of others, has been criticized as insanely risky given the dearth of information we have about the nature of ETIs. Two years ago, John Billingham and James Benford called for a global moratorium on METI, an initiative similar to the one David Brin and I worked on last decade.
But now, owing to all this human activity, the Earth has a radiosphere that’s inexorably billowing outwards at the speed of light — a clear signal that’s just waiting to be picked up.
And indeed, according to the new paper’s authors, Jacob Haqq-Misra, Michael Busch, Sanjoy Som, and Seth Baum, this leakage could in fact be detected by an extraterrestrial intelligence (ETI) armed with the right listening equipment.
Our signals decrease in intensity as they leak out into the cosmos. But depending on the signal’s strength and frequency, these waves can propagate for cosmologically vast distances and still carry enough information to indicate the presence of intelligent life.
Arecibo Observatory. Credit: H. Schweiker/WIYN and NOAO/AURA/NSF.
The Arecibo Planetary Radar in Puerto Rico provides a good example. As the researchers note, at a transmitting power of 0.8 MW and a frequency of 2,380 MHz, the APR’s powerful signal could be picked up by a “watcher” with a 1 km² receiving antenna at distances of up to 200,000 light years!
Credit: Haqq-Misra et al.
That's a rather extraordinary claim, so I spoke to SETI expert and scifi novelist David Brin about it — and he's not convinced detection is this easy. He told me that, even if an ETI had a one-square-kilometer array, they would have to point it at Earth for the duration of an entire year. "Because it would take that long," he told io9. "But why stare if you don't already have a reason to suspect?"
Like SETI Institute's Seth Shostak, Brin believes that Earth is not detectable beyond five light years. 
"With one exception: Narrow-focused, coherent (laser-like) planetary radars that are aimed to briefly scan the surfaces of asteroids and moons," he says, "And not to be confused with military radars that disperse."
This new paper, says Brin, is very unconvincing about detectability of leakage.
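To see why the dwell-time question matters, here's a back-of-envelope link budget in Python. The 0.8 MW power, 1 km² collecting area and 200,000 light-year distance come from the paper; the ~73 dBi transmit gain, 20 K system temperature and 1 Hz detection bandwidth are my own illustrative assumptions, not figures from either source.

```python
# Back-of-envelope link budget: can a 1 km^2 array detect Arecibo's
# planetary radar at 200,000 light years? Power, area, and distance are
# from the paper; gain, system temperature, and bandwidth are assumptions.
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
LY = 9.4607e15                # one light year, m

P_tx = 0.8e6                  # transmitter power, W (paper)
G_tx = 10 ** (73 / 10)        # assumed transmit gain, ~73 dBi
A_rx = 1e6                    # receiving area, m^2 = 1 km^2 (paper)
T_sys = 20.0                  # assumed receiver system temperature, K
B = 1.0                       # assumed matched detection bandwidth, Hz
d = 200_000 * LY              # distance (paper)

flux = P_tx * G_tx / (4 * math.pi * d ** 2)   # W/m^2 at the receiver
P_rx = flux * A_rx                            # collected signal power, W
P_noise = k_B * T_sys * B                     # thermal noise power, W
snr_1s = P_rx / P_noise                       # SNR after ~1 s at 1 Hz

# Radiometer equation: SNR grows as sqrt(B * t) with integration time t.
t_detect = (5.0 / snr_1s) ** 2 / B            # seconds to reach SNR = 5
print(f"signal power : {P_rx:.2e} W")         # ~3.5e-25 W
print(f"noise power  : {P_noise:.2e} W")      # ~2.8e-22 W
print(f"days to SNR=5: {t_detect / 86400:.0f}")
```

Under even these generous assumptions, the watcher needs months of continuous integration pointed at our exact coordinates to pull the signal out of the noise, which is essentially Brin's objection.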

To transmit or not to transmit?

Earth's leakiness aside, we also need to know if anyone out there is even listening.
As the authors note, some SETI experts contend that, if an ETI really wanted to know that we’re here, they could locate us without having to listen for our radio waves. For example, they could figure out that life is here by analyzing the spectrum of reflected ultraviolet, optical, and near-infrared sunlight from the Earth’s surface and atmosphere. Or, an ETI could learn of our technological civilization by detecting artificial nighttime lighting of large urban areas, or by detecting exaggerated amounts of carbon dioxide in the atmosphere.
More conceptually, advanced civs could pepper the galaxy with Bracewell communication probes — a point the authors fail to mention in the paper; such probes could already be in the neighbourhood, waiting for a particular signal.
There’s always the possibility, of course, that we’re too far away from the nearest ETI. Or that alien life is rare. Depending on how one fills in the Drake Equation, there could be anywhere from one (just us) to 100,000 — or even millions — of civs currently residing in the galaxy. But we just don’t know, which means we have no good way of gauging how detectable we really are.
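For reference, the Drake Equation is just a product of seven factors. Here's a minimal sketch; every input below is an illustrative guess (none of these values come from the article or the paper), which is precisely the problem:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake Equation: expected number of detectable civilizations.

    R_star: star formation rate (stars/year); f_p: fraction of stars with
    planets; n_e: habitable planets per system; f_l, f_i, f_c: fractions
    developing life, intelligence, and detectable technology; L: lifetime
    of the signaling phase, in years.
    """
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Pessimistic guesses: intelligence is rare, civilizations are short-lived.
print(drake(1.0, 0.2, 0.1, 0.1, 0.01, 0.1, 500))   # ~0.001 -- fewer than one
# Optimistic guesses: life is common and civilizations endure.
print(drake(7.0, 0.9, 1.0, 0.5, 0.5, 0.5, 1e6))    # ~790,000
```

Depending on the guesses, the output swings across nine orders of magnitude, from "fewer than one" to hundreds of thousands of civilizations.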
Furthermore, and as Brin pointed out to me, the authors failed to address the possibility of colonization. "If travel happens, then the number of sites skyrockets," he says.
Also, ETIs may be able to detect our signals without being able to make any sense of them. If an alien receives a METI signal, it would likely consist of easily decipherable mathematical concepts built upon a computational language (a la Carl Sagan’s Contact). Radio leakage, on the other hand, would be meaningless and almost completely devoid of context. But as the authors write, this distinction hardly matters:
Earth’s radio leakage and deliberate transmissions will likely be identifiable by ETI as a technological signature because no other examples of such signals exist in nature. The ability of ETI to decipher or interpret the content of a signal is therefore irrelevant to their ability to use it to learn that humans exist...
But assuming detection is not easy, and that there are other variables at play (like cosmologically vast distances and the potential for many short-lived civilizations), we still need to ask whether or not humanity would benefit or face terrible consequences from alien contact.

Weighing the risks

And indeed, as the authors note, a standard risk assessment is in fact warranted: multiply the probability of an event occurring by the magnitude of the harm if it does occur.
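Spelled out, that rule is just an expected-value calculation. In my own notation (not necessarily the paper's), with p_i the probability of contact scenario i and m_i the magnitude of its harm or benefit:

$$ \mathbb{E}[\text{harm}] \;=\; \sum_{i} p_i \, m_i $$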
Sure — sounds sensible. But it’s difficult, if not impossible, to assess the magnitude of the harms that could come from ETI contact. We don’t know the nature of the interactions, nor do we know alien ethics (particularly from a super-technological machine-based civilization).
We also don’t know how we’d interact. The entire relationship could be conducted via remote messaging. But they could send us something rather nasty. At the same time, a positive, non-malicious message could really benefit us. An ETI could provide information about itself or its technologies which could advance and greatly influence the human condition.
Alien contact could also have positive and negative outcomes for many societal structures, religions, and philosophies; different human groups would be affected differently. Interstellar civilizational encounters could be similar to — if not considerably worse than — Europeans who made first contact with stone age societies.
There is another possibility — that the vast majority of our transmissions, and those of any civilization for that matter, will be detected long after we’re gone. Consequently, this is all a futile exercise. If the galaxy is littered with short-lived civilizations — a possible reconciliation of the annoying Fermi Paradox — all radio-transmitting ETIs are essentially sending “time capsule” messages or trace-signatures into space. The galaxy could be awash with echoes from extinct civilizations. Determining civilizational longevity, therefore, is crucial to our assessment of the risks and benefits of transmitting into space.
Which brings up another interesting point: Maybe there’s value in transmitting a comprehensive “time-capsule” into space as a way of archiving or preserving our civilization’s vast history. If we go extinct, at least some other civilization may learn about us. Or more romantically, we’ll rest knowing that our signals are propagating through space long after we’re gone.  
So, in response to the question of whether or not we should transmit, the authors write:
[B]ecause we cannot estimate the probability or magnitude of contact with ETI, we make no attempt to calculate the term. By extension, any conclusions that depend on knowing are conditional.
Which seems like a rather wishy-washy answer. The authors conclude that “the benefits of radio communication on Earth today outweigh any benefits or harms that could arise from contact with ETI.” What they mean is that it may be more important to our security and survival if we continue to develop powerful communications technologies; it’s simply too valuable (and disruptive) to give up.
But how can they possibly know for sure!? Brin referred to it as "arm-waving mumbo jumbo" — and an "utterly tendentious and unsupported claim."
In regards to METI, the authors conclude that current efforts, which are weak and mostly symbolic, are mostly harmless:
These transmissions create benefits such as opportunities for educational public outreach and the ability to develop scientific groundwork for future METI projects. The costs associated with METI at low levels of detectability are small, so such projects create overall positive value for humanity and should continue.
But ramping up the METI project, like creating powerful beacons, could result in highly uncertain outcomes. Mercifully, the authors conclude that governments and other agencies need to get their act together and start talking about it.
“Even if we never succeed in receiving a message from an extraterrestrial civilization, METI may still prove a worthwhile investment as a way to increase humanity’s awareness of itself in the greater cosmos.”
Unless, of course, someone is in fact listening, and they'd like to pay us a visit...
Read the entire study at Space Policy: “The Benefits and Harms of Transmitting Into Space.”

This article originally appeared at io9.

Postscript

David Brin wrote this to me in a follow-up email:

The authors' arguments boil down to excusing METI via what is known as the "barn door" argument -- the notion that Earth civilization is already drawing attention to itself (the horses are already out), so why bother restricting anyone from shouting into the cosmos (don't bother closing the barn door once the horses are gone). It is both specious and manipulatively hypocritical on several levels. 
1) The paper is flawed because it does not even discuss the dwell or integration time that an alien square kilometer array (SKA) must dedicate, staring solely at Earth for an extended period, in order to pick out signal from noise.  If that time is long, and most scholars think so, then no civilization will do it unless they already suspect there is something or someone here!  That is, unless they have gobs and gobs of SKAs to play with. Both of those are possible, for varied reasons.  But they aren't super-likely. 
2) It is disingenuous to imply that METI beams -- e.g. Zaitsev's from Evpatoria -- are just like the radars used by the USAF to characterize orbital debris.  I'd like to see Dr. Busch and his colleagues defend that implication.  Narrow, coherent, laser-like and powerful, beams like the ones used by Zaitsev to do his cosmic shouting are like a lighthouse next to a flashlight. 
3) The Barn Door excuse takes "disingenuous" to a level that tips over into outright sophistry and deceit.  Let us ask, if aliens already detect us, why are some fetishists so eager to blast away "yoo-hoo" shouts into space?  They aim to accomplish a major and dramatic change in the visibility of Earth civilization... they say so publicly.  Hence, the Barn Door excuse is a travesty of verbal legerdemain. 
Busch & company then dive into the worst part of the paper, a razzle-dazzle arm-waving of "risk factors" that bear no relationship to the way the science of risk analysis operates, conjuring inputs out of thin air and then declaring or "positing" that the likely good outweighs any calculation of possible bad outcomes.  This exercise was too grimly awful to even be amusing, especially since the "dissidents" in the SETI community, including John Billingham, Michael Michaud and myself, have not asked for a ban on transmissions from Earth, only widespread and eclectic collegial discussion of this issue, with inputs by experts who actually know about the many and varied risk factors involved. 
Reiterating, the thing we have asked for is a wider discussion, beyond the insular community of SETI fans and a few dozen radio astronomers, of a matter that could have great bearing on the success - and even survival - of our descendants. We seek a vast and fascinating exchange, bringing together the planet's best minds to enthrall the public with open deliberation of all factors.  Those who refuse such discussion - shrugging aside any need or moral obligation to consult the rest of us - are the ones practicing censorship.  They are the ones engaging in reckless assumptions, willing to wager our posterity on a few "posits" on the back of an envelope.