March 13, 2006

Space: Not the Final Frontier

Our destiny isn't outer space but inner space—if we can avoid extinction

By George Dvorsky, January 20, 2004

So, it looks like the Americans are going to space again.

Since President Bush's recent announcement about the United States' renewed commitment to space exploration, it's been hard not to get caught up in all the excitement about proposed lunar colonies and manned expeditions to Mars. And I must admit, when I heard the news my arms filled with goose bumps and I was overwhelmed with a deep sense of history in the making.

Unsurprisingly, a significant number of commentators and futurists have declared this next stage of space exploration to be the starting block to something greater, namely humanity's expanding presence in the Galaxy. Adding fuel to this sentiment is the general consensus that an imperative exists to colonize space, lest we deny our natures and precariously leave all our eggs in one basket, risking the complete extinction of the species.

This assumption, that we are destined to become a star-hopping species, is nearly unanimous—one that's rarely, if ever, questioned. It is taken for granted that space is the next obvious realm for the advancement of human intelligence, the logical continuation of our individual and collective tendency to migrate, colonize and explore. It'll only be a matter of time, it is thought, before humanity finally hitches a ride on the wagon train to the stars, spreading its seed across the Galaxy and possibly even beyond.

The trouble with this thinking, however, is that it's likely wrong.

It is highly improbable that space exploration and colonization beyond our immediate surroundings is in our future. The evidence for this perspective is growing steadily, leading to some interesting and disturbing conclusions about the future and our place in it.

Simply put, our chance for survival this century is in serious doubt. The next 100 years will pit humanity against a series of apocalyptic threats unlike any encountered in our history.

And assuming we can avoid extinction and burst through to the other side of the technological "Singularity," it's more likely than not that the next stage of our evolution will see intelligence venturing into "inner space" rather than outer space.

Should this assumption prove true, our future will still be filled with remarkable explorations and experiences—they're just not going to happen where we thought.

Where the heck is everybody?

Anyone who suggests that interstellar exploration and colonization awaits us in the future must reconcile one rather glaring data point: We have yet to encounter signs of extraterrestrial intelligence when clearly we should have by now.

At first glance, this statement might sound odd, even presumptuous. But typical responses to this problem are almost always knee-jerk and often aloof, with people confidently proclaiming our cosmic uninterestingness or the incredible vastness of space as excuses for why we haven't shaken hands with ET.

But what such casual assumptions ignore is the mounting evidence suggesting that we really should have met someone by now, and that by consequence, we need to acknowledge that something very fishy is going on in the Universe.

This conundrum is known as the Fermi Paradox, named after the Italian physicist Enrico Fermi, who famously posed the problem in 1950. Fermi calculated that any sufficiently advanced alien society should be able to colonize the entire Galaxy within 10 million years. While this length of time might seem extreme at first, it represents only 0.08% of the total age of the Galaxy, which is 12.6 billion years old. According to cosmologist Charles Lineweaver's estimates, planets started forming 9.1 billion years ago; taking that into consideration, the time required to colonize the Galaxy is still a minuscule 0.1% of that planet-forming era.

Given our Galaxy's age, then, it could have been colonized many times over by now—some 910 times, in fact. And as cosmologist Milan Cirkovic notes, "new complex-lifeform habitats cannot be expected to arise in a colonized Galaxy." We are forced to conclude that we live in an uncolonized Galaxy, bewildered as to why.
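As a sanity check on the arithmetic above, the quoted figures (a 10-million-year colonization time, a 12.6-billion-year-old Galaxy, planet formation beginning 9.1 billion years ago) can be plugged in directly. This is just a back-of-the-envelope verification of the article's own numbers, not new data:

```python
# Sanity check of the colonization-timescale fractions quoted above.
# All figures are the article's assumptions, in years.
GALAXY_AGE = 12.6e9       # age of the Milky Way
PLANET_ERA = 9.1e9        # time since planets began forming (Lineweaver)
COLONIZE_TIME = 10e6      # estimated time to colonize the entire Galaxy

frac_of_galaxy_age = COLONIZE_TIME / GALAXY_AGE
frac_of_planet_era = COLONIZE_TIME / PLANET_ERA

print(f"{frac_of_galaxy_age:.2%}")  # 0.08%
print(f"{frac_of_planet_era:.2%}")  # 0.11%
```

Either way, the window needed for full galactic colonization is a rounding error against the timescales available.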

Whither the Von Neumann probes?

To add credence to Fermi's argument, mathematician John von Neumann conceived of an efficient way for an advanced society to colonize the Galaxy. Dubbed the Von Neumann probe, it is a self-replicating machine that could travel from solar system to solar system, spawning copies of itself and spreading at an exponential rate.

Thanks to Eric Drexler and his conception of molecular manufacturing, we now know that such a probe is theoretically possible. In fact, given the potential for strong nanotechnology, an intelligent probe could conceivably detect habitable planets and spawn any kind of complex molecular system, including intelligent biological life.

More to the point, the very fact that we can already conceive of a way to colonize the Galaxy is telling; our descendants will undoubtedly look back on our naïve, primitive Von Neumann schemes from the vantage of far more advanced colonization strategies. Simply put, galactic colonization should be a rather straightforward task. Appeals to distance, time and technological constraints must therefore be rejected as resolutions to the Fermi Paradox. The question must be asked, then: Why isn't the Galaxy colonized? Why the Great Silence?

Possible explanations

There are many counterarguments that attempt to reconcile the paradox. It's possible, for example, that N in the Drake Equation is less than 1, meaning that we may be the only advanced civilization in the Galaxy. This is often referred to as the Rare Earth hypothesis.
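The Drake Equation mentioned here estimates N, the expected number of communicating civilizations in the Galaxy, as a product of seven factors. A minimal sketch with deliberately pessimistic, purely illustrative parameter values (none of them from this article) shows how easily N can fall below 1, as the Rare Earth hypothesis supposes:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake Equation: expected number of detectable civilizations.

    R_star: rate of star formation in the Galaxy (stars/year)
    f_p:    fraction of stars with planets
    n_e:    habitable planets per star that has planets
    f_l:    fraction of those on which life arises
    f_i:    fraction of those where intelligence develops
    f_c:    fraction of those that become detectable
    L:      lifetime of a detectable civilization (years)
    """
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative Rare Earth-style pessimism: N comes out well below 1.
N = drake(R_star=1.0, f_p=0.5, n_e=0.1, f_l=0.1, f_i=0.01, f_c=0.1, L=1000)
print(round(N, 6))  # 0.005
```

With more optimistic inputs the same product can easily exceed 1, which is exactly why the equation settles nothing on its own: the answer is dominated by the factors we cannot yet measure.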

Another credible idea is that the conditions required to foster the advancement of intelligent life have only recently been established in the Galaxy. Thus, we may be one of many civilizations currently working towards the capacity for interstellar exploration.

It's also conceivable that advanced intelligences quarantine themselves against threats such as digital viruses or, conversely, to prevent themselves from contaminating less advanced societies.

Also, idle Von Neumann probes may already be present in our solar system. Arthur C. Clarke's Von Neumann probe, portrayed as a black monolith in 2001: A Space Odyssey, awaited humanity's progression to a specific stage of development before instigating the next phase of the species' evolution.

Such explanations, however, are in violation of Occam's Razor, which suggests that the simplest explanation is often the best. It's for this reason that a number of people point to the Rare Earth hypothesis as a reconciliation for the paradox.

Doom soon?

Another simple but disturbing explanation is the Great Filter argument. Some theorists, such as economist Robin Hanson, speculate that all intelligences reach a critical point in their development that is virtually insurmountable—a technological or evolutionary milestone that tends to result in extinction.

Evidence for this grim possibility is starting to emerge.

A number of philosophers who engage in thought experiments based around the anthropic principle (the general idea that the laws governing the Universe are special in order for life and complexity to emerge) have posited the Doomsday Argument. Conceived by astrophysicist Brandon Carter and refined by Richard Gott and John Leslie, the argument holds that we should use probabilistic reasoning to locate ourselves within the total roll call of all humans who will ever live, and that doing so forces us to conclude that we occupy a non-special position in human history. In other words, it's more likely that we're closer to the end of all possible humans than to the beginning.
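One common formulation of this reasoning (Gott's version, sketched here under the illustrative assumption, not from the article, that roughly 60 billion humans have been born to date) runs as follows: if our birth rank is effectively a uniform random draw from the full roll call, then with 95% confidence we are not among the first 5% of all humans, which caps the total:

```python
def doomsday_upper_bound(born_so_far, confidence=0.95):
    """Gott/Carter-Leslie-style bound on the total number of humans.

    If your birth rank is uniformly distributed over the whole roll
    call, then with probability `confidence` you are not among the
    first (1 - confidence) fraction of it, so the total is at most
    born_so_far / (1 - confidence).
    """
    return born_so_far / (1.0 - confidence)

# Assuming ~60 billion humans born to date (an illustrative figure):
total = doomsday_upper_bound(60e9)
print(total)  # roughly 1.2e12, i.e. about 1.2 trillion humans, ever
```

A cap of about 1.2 trillion may sound generous, but at current birth rates it is reached in mere millennia, not the millions of years a star-faring species would presume for itself.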

What could possibly cause the mass extinction of all humanity in the near future? Unfortunately, lots of things.

Disturbed by the implications of the Doomsday Argument, and intent on doing something about it, transhumanist philosopher Nick Bostrom has compiled a list of possible human extinction scenarios. Similarly, Bill Joy published his now famous article "Why the Future Doesn't Need Us" in Wired magazine in 2000, and cosmologist Sir Martin Rees recently published his book Our Final Hour.

These thinkers tend to argue that the most significant risks involve 21st-century technologies, including nanotechnology grey goo scenarios, deliberately or accidentally instigated bioengineered plagues, particle accelerator catastrophes, and the rise of artificial superintelligence (SAI).

SAI may be the gravest threat. The gap between human and proposed greater-than-human intelligence is nontrivial: an SAI could be as much as 5,000 times more capable than a human mind, and endowed with the capacity for recursive self-improvement. The damage such an entity could inflict is nearly unfathomable, leading researchers such as artificial intelligence theorist Eliezer Yudkowsky to devote their life's work to the problem.

The advent of SAI, in conjunction with the broader implications of rapidly accelerating technology, has been dubbed the "Singularity," the future point, or event horizon, in the human story that we cannot see beyond. The Singularity, it is thought, could either be a very bad thing for us, or a very good thing.

Space is in our "rear view mirror"

Developmental systems theorist John Smart is one thinker who believes that the Singularity may be a good thing. Arguing in favor of the transcension hypothesis, Smart has observed that, as intelligence develops, it increasingly organizes itself around smaller compressions of matter, energy, space and time. He believes this tendency points to directionality, and that intelligences, rather than venturing out into the "information desert" of space, seek out realms in the limitless richness of inner space. "I think all Universal intelligence follows a path of transcension," argues Smart, "not expansion."

Smart argues that there's a certain statistical inevitability to all this, that the Universe "organizes" intelligence in such a way that it is compelled to work towards a developmental Singularity, an evolutionary milestone representing an existential phase shift.

Post-Singularity intelligences, says Smart, will likely reside in digital ecologies, a realm akin to virtual reality, "a place where the technology stream flows so fast that new global rules emerge to describe the system's relation to the slower-moving elements in its vicinity, including our biological selves." Influenced by such thinkers as the Jesuit philosopher Pierre Teilhard de Chardin, Smart considers the Internet to be the embryonic precursor to the "global brain" or noosphere—the next evolutionary paradigm shift in human communication and organization.

According to Smart, once intelligence saturates its local environment, it is constrained to leave local spacetime. He expects that event (a developmental singularity) to occur in a "cosmologically insignificant time" after the emergence of a technological singularity. "It learns how to enter hyperspace," argues Smart, "that suspected multidimensional environment hinted at in our string and supersymmetry theory, and within which cosmologists tell us new Universes may be born and other yet-uncertain events may happen."

I spoke to Smart about the Fermi problem, and he remarked that space will be in our "rear view mirror" as we move into our new stomping grounds. "There is so much space within even our own solar system that there seems to be no realistic possibility that we'll send intelligence even to the edge of it," says Smart. "No matter, we can simulate it to amazing levels already, with even the primitive eyes and brains we've developed locally."

To avoid the eggs-in-one-basket problem, and to prevent the extinction of Earth-based human and posthuman life, futurists such as Vernor Vinge have suggested that post-singularity intelligences will build local secondary systems. Supplementing this, redundancy could be achieved by placing a few Eganesque repositories of local knowledge off-Earth.

To counter natural disasters such as gamma ray bursts from nearby supernovas, posthumans could develop nanotechnology or femtotechnology shielding around repositories, while foreign objects such as asteroids could be easily detected and then redirected or destroyed.

Informed guesses

Of course, while intensely seductive and provocative, the Great Filter and Developmental Singularity hypotheses are just that: guesses. But given the Fermi problem, the Doomsday Argument, the dangerous technologies soon to be within our grasp and the observation that humanity is likely headed towards a radical phase shift, the idea that space is incontrovertibly in our future is equally speculative. The burden of proof is slowly starting to shift onto those who would suggest that we are headed for the stars.

As for the renewed interest in exploring our solar system, I'm generally in favor of it, and I'm eager to see how far we branch out. But while we engage in these explorations and advancements, we must act responsibly and tackle high-priority issues in an effort to avoid catastrophe this century. The existential risks we face will likely be upon us well before we develop the capacity to permanently sustain human life off-planet.

And assuming we can survive the next 100 years, it would appear that our destiny is likely not in the stars, but in ourselves, as we increase the depth of our capabilities and experiences rather than the span of our physical presence.

Copyright © 2004 George Dvorsky

This column originally appeared on Betterhumans, January 20, 2004.



2 comments:

Anonymous said...

Then there's the interpretation of the anthropic principle which notes that only pure unadulterated human arrogance could enable "free-thinkers" to *believe* that we little humans could possibly *choose* to violate the ecobalance of which we are contributing members.

That's the one that notes that... 'you can't fool mother nature'... idiots.

John Latter said...

My major interests are in Evolution and Psychology and one place that they 'meet' is that point in evolutionary history when psychological trauma first became possible.

As a rule of thumb, psychological trauma has its 'point of origin' within the 'old mammalian brain' - thereby indicating how long it has been part of Man's heritage.

From my perspective, the major question to be answered in the scenario of Man meeting an Alien species is whether or not they have 'solved' this particular type of injury to life.

If they have, and we haven't, then I would imagine the best we could hope for is quarantine.

Only in the last century, for example, a child-rearing expert gave this advice: Truby King once suggested that infants should be subjected to a fixed feeding routine in order that they quickly become accustomed to the demands of Society. A logical way to treat infants?

If an infant's next feed is due at 4 pm, but the infant unaccountably wakes up hungry at 3 pm, then the conditions for inflicting a 'hands-off' psychological trauma are all in place: the infant exists entirely in the 'present moment', has no awareness of the past, or that there will be a future (in which it might be fed). Cries of hunger will eventually turn to anger and should that anger reach an unsustainable peak then trauma may result. Systemizer parents would not even be aware that it had happened - the logic of "the next feed is not due until 4 pm" being unassailable.
[From The Absent-Minded Professor and Evolutionary Theory]

If we're still doing that to our own species, then how will an Alien species that isn't react?