February 16, 2013

Who should pay when your robot breaks the law?


Robots are unquestionably getting more sophisticated by the year, and as a result, are becoming an integral part of our daily lives. But as we increase our interactions with, and dependence on, robots, an important question needs to be asked: What would happen if a robot actually committed a crime, or even hurt someone — whether deliberately or by mistake?

While our first inclination might be to blame the robot, the matter of apportioning blame is considerably more complicated and nuanced than that. As with any incident involving an alleged criminal act, we need to consider a whole host of factors. Let's take a deeper look and find out who should pay when your robot breaks the law.

To better understand this issue I spoke to robot ethics expert Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. It was through my conversation with him that I learned just how pertinent this issue is becoming. As Lin told me, "Any number of parties could be held responsible for robot misbehavior today."

Robot and machine ethics


Before we get too far along in the discussion, a distinction needs to be made between two different fields of study: robot ethics and machine ethics.

We are currently in the age of robot ethics, where the concern lies with how and why robots are designed, constructed, and used. This includes domestic robots like the Roomba, self-driving cars, and the potential for autonomous killing machines on the battlefield. These robots, while capable of "acting" without human oversight, are essentially mindless automatons. Robot ethics, therefore, is primarily concerned with the appropriateness of their use.

Machine ethics, on the other hand, is a bit more speculative in that it considers the future potential for robots (or more accurately, their embodied artificially intelligent programming) to have self-awareness and the capacity for moral thought. Consequently, machine ethics is concerned with the actual behavior and actions of advanced robots.

So, before any blame can be assigned to a robot for a nefarious action, we would need to decide which of these two categories applies. For now and the immediate future, robot ethics most certainly qualifies, in which case accountability should be attributed to the manufacturer, the owner, or in some cases even the victim.

But looking further ahead, to a time when robots match our own level of moral sophistication, the day is coming when they will very likely have to answer for their crimes.

Manufacturer liability


For now and the foreseeable future, culpability for a robot that has gone wrong will usually fall on the manufacturer. "When it comes to more basic autonomous machines and systems," said Lin, "a manufacturer needs to answer for any software or hardware defect that should have been foreseen."

He cited the hypothetical example of a Roomba that experiences a perfect storm of confusion — a set of variables that the manufacturer could not have anticipated. "One could imagine the Roomba falling off an edge and landing right on top of a cat," he said, "in which case it could be said that the manufacturer is responsible."

Indeed, because the robot is just operating according to the limits of its programming, it cannot be held accountable for its actions. There was absolutely no malice involved. And assuming that the robot was being used according to instructions and not modified in any way, the consumer shouldn't be held liable either.

Outside intended use


Which, as Lin pointed out, raises another issue.

"It's also possible that owners will misuse their robots and hack directly into them," he said. Lin pointed to the example of home defense robots that are being increasingly used in Asia — including robots that go on home patrol and can shoot pepper spray and paint-ball guns. "It's conceivable that someone might want to weaponize the Roomba," he told me, "in which case the owner would be on the hook and not the manufacturer." In such a scenario, the robot would act in a way completely outside of its intended use, thus absolving the manufacturer from liability.

But as Lin clarified for us, it's still not as cut-and-dried as that. "Just because the owner modified the robot to do things that the manufacturer never intended or could never foresee doesn't mean they're completely off the hook," he said. "Some might argue that the manufacturer should have foreseen the possibility of hacking, or other such modifications, and in turn built in safeguards to prevent this kind of manipulation."

Blame the victim


And there are yet other scenarios in which even the victim could be held responsible. "Consider self-driving cars," said Lin, "and the possibility that a jaywalker could suddenly run across the street and get hit." In such a case, it's the victim who's really to blame.

And indeed, one can imagine a whole host of scenarios in which people, through their inattention or recklessness, fall prey to the growing number of powerful, autonomous machines around them.

Machines that are supposed to kill


Complicating all this even further is the potential for autonomous killing machines.

Currently, combat drones are guided remotely by human operators, who are in turn responsible for any violent action committed by the device. If an operator kills a civilian or fellow soldier by mistake, they will have to answer for it and, depending on the circumstances, likely face a military tribunal.

But that said, there are already sentry bots on duty in Israel and South Korea. What would happen if one of these robots were to kill somebody by mistake? Actually, as Lin informed us, it's already happened. Back in October 2007, a semi-autonomous robotic cannon deployed by the South African army malfunctioned, killing nine "friendly" soldiers and wounding 14 others.

It would be all too convenient, and even instinctive, to blame the robot for an incident like this. But because these systems lack any kind of moral awareness, they cannot be held responsible.

Who, therefore, should account for such an egregious mistake? The person who deployed the machine? The procurement officer? The developer of the technology? Or as Lin asked, "Just how far up the chain of command should we go — and would we ever go so far as to implicate the President, who technically speaking is the Commander-in-Chief?"

Ultimately, suggested Lin, these incidents will have to be treated on a case-by-case basis. "It will all depend on the actual scenario," he said.

Quasi-persons


Looking ahead to the future, there's the potential for a kind of behavioral grey area to emerge between a fairly advanced AI and a fully robust moral machine. It's conceivable that a precursor moral AI will be developed that has a very limited sense of self-awareness and personal responsibility — but a sense of subjectivity and awareness nonetheless. There's also the potential for robots to have ethics programmed right into them.

Unlike simpler automatons, these machines would be capable of actual decision making — albeit at a very rudimentary level. In a sense, they'd be very much like children — who, depending on their age, aren't entirely held accountable for their actions.

"There's a kind of strange disconnect when it comes to robot ethics," noted Lin, "in that we're expecting near perfect behavior from robots when we don't really expect it from ourselves." He agrees that children are a kind of special case, and that they're essentially quasi-persons. Robots, he argues, may have to regarded in a similar way.

Consequently, owners of robots would have to serve as parents or guardians, ensuring that they learn and behave appropriately — and in some cases even take full responsibility for their actions. "It's the same with children," said Lin. "There will have to be a sliding scale of responsibility for robots depending on how sophisticated they are."

The rise of moral machines


And finally, there's the potential for bona fide moral machines — those robots capable of knowing right from wrong. But again, this is still going to prove a tricky area. An artificially intelligent robot will be endowed with a very different kind of mind than that possessed by a human. By its very nature, it will think very differently than we do. And as a consequence, it will be very difficult to know its exact inner cogitations.

But as Lin noted, this is an area that we humans are still struggling to deal with ourselves. He pointed out that the latest neuroscience suggests we may not have as much free will as we think. Indeed, courts are beginning to have difficulty assigning blame to defendants who may suffer from biological impairments.

All this said, could we ever prove, for example, that a robot can act of its own free will? Or that it truly understands the consequences of its actions? Or that it really feels empathy?

If the answers are yes, then a robot could truly be made to pay for its crimes.

But more conceptually, these questions are important because, as a society, we tend to confer rights and freedoms upon those persons capable of such thoughts. Thus, if we could ever prove that a robot is capable of moral action and introspection, we would not only have to hold it accountable for its actions, we would also have to endow it with fundamental rights and protections.

It would appear, therefore, that we're not too far from the day when robots will start to demand their one phone call.

This article originally appeared at io9.

February 8, 2013

Is SETI at risk of downloading a malicious virus from outer space?


We take it for granted that the search for extraterrestrial intelligence (SETI) is a safe endeavor. Seriously, what could possibly go wrong with passively searching for interstellar radio signals? Unfortunately, the answer is quite a lot — especially if the incoming signal contains something malicious, like a computer virus or Trojan horse.

And according to the experts, this isn't just idle speculation — the threat is very real. So, just how concerned do we need to be?

To get a better sense of this possibility, I spoke to two experts on the matter: Andrew Siemion, a PhD candidate in astronomy at SETI-Berkeley, and Milan Cirkovic, Senior Research Associate at the Astronomical Observatory of Belgrade and a leading expert on SETI.

We'll get to their answers in just a second, but it's worth doing a quick review to understand where this idea came from –- and not surprisingly, it's science fiction inspired by science.

Visions of viral doom


Science fiction writers have been worried about this possibility ever since the advent of SETI in the early 1960s.

Soon after the launch of Frank Drake's Project Ozma in 1960, which was the pioneering attempt to listen for extraterrestrial radio signals, the BBC produced A for Andromeda, a television series written by the acclaimed cosmologist and science fiction writer Fred Hoyle. The story concerns a group of scientists who detect a radio signal from a distant galaxy that contains instructions for the design of an advanced computer. The scientists decide to go ahead and build the computer, which in turn produces a new set of instructions for the creation of a living organism named Andromeda. It's at this point that one of the scientists raises an objection, amid fears that Andromeda's purpose is to subjugate humanity.

In 1968, Stanislaw Lem revisited this theme in his novel His Master's Voice. In the story, scientists work to decode what seems to be a message from outer space, specifically a neutrino signal from the Canis Minor constellation. As the scientists decode the data, they conclude that it is a mathematical description of an object, possibly a molecule or even an entire genome. They go on to construct two strange substances that exhibit odd properties: a glutinous liquid and a solid object that looks like a slab of red meat. They learn that the liquid can cause an atomic blast at a remote location — which, if used as a weapon, would make deterrence impossible. As a result, many of the scientists become convinced that it's an extraterrestrial weapon of some sort.

And more recently, the idea of receiving instructions from aliens was explored by Carl Sagan in his 1985 novel Contact (which was made into a major motion picture in 1997). But unlike his worrywart sci-fi predecessors, Sagan portrayed aliens as being genuinely friendly.

In Sagan's story, extraterrestrial contact is made, with the aliens transmitting the blueprints to a massive engineering project — supposedly for us to build. After much consideration, the device is constructed, and it turns out to be a transportation device for a single human occupant.

Carl Sagan always held firmly to his belief in benign aliens. He was convinced that any advanced civilization had to be friendly by default — that overly aggressive or misguided aliens would have destroyed themselves before advancing to such a stage. His theory suggested that an interstellar selection effect was at work, and that the only advanced aliens left standing would be the good ones.

Be careful


Sagan's optimism notwithstanding, we should probably be more than a little bit wary of receiving a signal from a civilization that's radically more advanced than our own.

When we spoke to SETI-Berkeley's Andrew Siemion, he admitted that SETI is aware of this particular risk, and that they've given the issue some thought. He stressed that SETI's primary objective is just to detect a signal. "Detecting signals is far easier than decoding them," he told me. "Our searches don't attempt to decode or decipher any information content from signals that trigger our algorithms." In other words, the folks at SETI-Berkeley are only concerned with whether or not a signal is present, and whether it's real.

But that doesn't mean they aren't careful. When we asked Siemion about the possibility of inadvertently receiving or downloading a virus, he stressed that the possibility is extraordinarily low, but not impossible.

"Our instruments are connected to computers, and like any computers, they can be reprogrammed," he warned. "Our software receives input that ultimately comes from unknown sources, and again, while this input is never executed or decoded, we don't perform rigorous checks to validate this unknown input like a computer security conscious programmer might do with an internet application."

Siemion speculated that, if an extraterrestrial intelligence had very deep knowledge of the software systems we use for our experiments and the architecture of our computers, they might be able to send a sequence of signals that would cause a memory buffer to overflow and perhaps allow arbitrary code execution.

"However, if ET had this level of knowledge about terrestrial technology," he said, "it would make far more sense to use a similar technique with the thousands of satellite downlink stations dotting the globe, or the billions of cell phone radios constantly listening for a ping from a cellphone tower."

Siemion stressed that this doesn't apply to such projects as SETI@Home and Astropulse, which he said are "thoroughly vetted by very competent computer security professionals, and every effort is made to ensure [their] safety."

As for the threat of a Trojan horse, Siemion admitted the possibility, but doubted that humanity would ever blindly follow a set of blueprints or instructions received from another intelligent civilization.

"Just as human cultures establish trust over many decades and centuries moving in small steps, humanities' relationship with an extraterrestrial civilization would likely evolve slowly over perhaps many millennia," he told me. "Maybe after many thousands of years, when humanity has established some level of rapport with our cosmic neighbors, we might feel comfortable accepting and utilizing their technology."

Be afraid


Like Siemion, Milan Cirkovic believes that the risk of acquiring something nasty from an ETI is real. But he's considerably more worried. Alien invaders won't attack us with their spaceships, he argues — instead, they'll come in the form of pieces of information. And they may be capable of infiltrating and damaging or subverting our computing networks, in a manner similar to the computer viruses we're all too familiar with.

Cirkovic admits, however, that the possibility should be taken with a grain of salt. In order to work, an alien virus would have to somehow know or intuit our protocols and operating systems.

"The efficiency of a virus in achieving its malicious task is proportional to the degree of its specialization. More general viruses are, therefore, less efficient," he said. "To be able to infiltrate our networks, the alien virus should be general to a fantastic degree."

When we asked Cirkovic what the purpose of an ET virus might be, he responded, "If we discard anthropocentric malice, it seems that the most probable response is that they have evolved autonomously in a network of an advanced civilization — which may or may not persist to this day." If this is the case, speculated Cirkovic, these extraterrestrial viruses would probably just replicate themselves and subvert our resources to further transmit themselves across the Galaxy. In other words, the virus may or may not be under the control of any extraterrestrial civilization — it could be an advanced AI that's out of control and replicating itself by taking over the broadcast capabilities of each civilization it touches. A very frightening thought.

To prevent this, Cirkovic suggests that we sever any connection between SETI and METI (messaging to ET) equipment and the rest of the human info-sphere. He admits that this is easier said than done.

Cirkovic's fear is not without warrant — after all, people write viruses here on Earth all the time, for no particular reason. Perhaps signals such as these are the ultimate manifestation of computer viruses — self-replicating information systems that seek out compatible hosts and infect them.

It's clear from our conversations with Siemion and Cirkovic that extraterrestrial life may be more bizarre and dangerous than we can imagine. Should humanity eventually receive a transmission from the depths of space, we would do well to treat it with great caution and consideration.

This article originally appeared at io9.

Top image via x264-bb. Inset images via TechnoFile, Discovery.

February 2, 2013

7 Best-Case Scenarios for the Future of Humanity


Most science-fictional and futurist visions of the future tend toward the negative — and for good reason. Our environment is a mess, we have a nasty tendency to misuse technologies, and we're becoming increasingly capable of destroying ourselves. But civilizational demise is by no means guaranteed. Should we find a way to manage the risks and avoid dystopic outcomes, our far future looks astonishingly bright. Here are seven best-case scenarios for the future of humanity.

Above image courtesy Gary Tonge.

Before we get started it's worth noting that many of the scenarios listed here are not mutually exclusive. If things go really well, our civilization will continue to evolve and diversify, leading to many different types of futures.

1. Status quo


While this is hardly the most exciting outcome for humanity, it is still an outcome. Given the dire warnings of Sir Martin Rees, Nick Bostrom, Stephen Hawking, and many others, we may not be around to see the next century. Our ongoing survival — even if it's under our current state of technological development — could be considered a positive outcome. Many have suggested that we've already reached our pinnacle as a species.

Back in 1992, political scientist Francis Fukuyama wrote The End of History and the Last Man, in which he argued that our current political, technological, and economic mode was the final stop on our journey. He was wrong, of course; Fukuyama's book will forever be remembered as a neoconservative's wet dream written in reaction to the collapse of the Soviet Union and the rise of the so-called New World Order. More realistically, however, the call for a kind of self-imposed status quo has been articulated by Sun Microsystems cofounder Bill Joy. Writing in his seminal 2000 article, "Why the Future Doesn't Need Us," Joy warned of the catastrophic potential of 21st-century technologies like robotics, genetic engineering, and nanotech. Subsequently, he called for technological relinquishment — a kind of neo-Luddism intended to prevent dystopic outcomes and outright human extinction. The prudent thing to do now, argued Joy, is to make do with what we have in hopes of ensuring a long and prosperous future.

2. A bright green Earth


Visions of the far future tend to conjure images of a Cybertron-like Earth, covered from pole to pole in steel and oil. It's an environmentalist's worst nightmare — one in which nature has been completely swept aside by the onslaught of technology and the ravages of environmental exploitation. Yet it doesn't have to be this way; the future of our planet could be far more green and verdant than we ever imagined. Emerging branches of futurism, including technogaianism and bright green environmentalism, suggest that we can use technologies to clean up the Earth, create sustainable energy models, and even transform the planet itself.

An early version of this sentiment was presented via Bruce Sterling's Viridian Design Movement, an aesthetic ideal that advocated for innovative and technological solutions to environmental problems. Looking to the far future, the ultimate expression of these ideas could result in a planet far more lush and ecologically diverse than at any other point in its geological history. In such a future, humans could be re-engineered to live in harmony with the environment. All our energy needs would be completely met (a true and sustainable Kardashev I civilization). Using advanced models as our guide, we could also redesign and overhaul the Earth's ecosystem (including the elimination of predation and animal suffering). There's also the possibility of weather control. And we might finally be able to implement defensive measures to counter the effects of natural disasters (like asteroid impacts, earthquakes, and volcanic eruptions). Given an Earth like this, why would anyone want to leave?

Image: Thomas Cole's The Arcadian or Pastoral State, 1834.
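As an aside, the "Kardashev I" benchmark can be made quantitative with Carl Sagan's interpolation formula, K = (log10(P) - 6) / 10, where P is a civilization's power use in watts. Here is a quick sketch; the ~18 terawatt figure for present-day consumption is a rough, assumed number:

    import math

    def kardashev(power_watts):
        """Sagan's interpolated Kardashev rating for a power budget in watts."""
        return (math.log10(power_watts) - 6) / 10

    print(f"Humanity today (~1.8e13 W): K = {kardashev(1.8e13):.2f}")  # ~0.73
    print(f"Type I threshold (1e16 W):  K = {kardashev(1e16):.2f}")    # 1.00

By this measure, we're still a factor of roughly 500 in raw power away from the Type I mark.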

3. Watched over by machines of loving grace


Regrettably, it's very possible that the technological Singularity will be an extinction event. The onset of radically advanced machine intelligence — perhaps as early as 30 years from now — will be so beyond our control and understanding that it will likely do us in, whether it happens deliberately, accidentally, or by our own mismanagement of the process. But the same awesome power that could destroy us could also result in the exact opposite. It's this possibility — that a machine intelligence could create a veritable utopia for humanity — that has given rise to the Singularitarian movement.

If future AI designers can guide and mold the direction of these advanced systems — and most importantly, their goal orientation — it's conceivable that we could give rise to what's called "friendly AI" — a kind of Asimovian intelligence that's incapable of inflicting any harm. In fact, it could also serve as a supremely powerful overseer and protector. It's a vision that was best expressed by Richard Brautigan in his poem "All Watched Over by Machines of Loving Grace."

I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

4. To boldly go where no one has gone before...


We need to get off this rock and start colonizing other solar systems — there's no question about it. Not only does our ongoing survival depend on it (the "all our eggs in one basket" problem), it's also in our nature as a species to move on. Indeed, by venturing beyond our borders and blowing past our biological limitations, we have continually pushed our society forward — which has resulted in ongoing technological, social, political, and economic progress. Even today, our limited ventures into space have reaped countless benefits, including satellite technologies, an improved understanding of science — and even the sheer thrill of seeing a high-definition image streamed back from the surface of Mars.

Should our civilization ever be capable of embarking upon interstellar colonization — whether through generation ships, self-replicating von Neumann probes, or an outwardly expanding bubble of digital intelligence — it would represent a remarkable milestone, possibly for all life in the Milky Way. As it stands, we appear to live in a Galaxy devoid of interstellar travelers — a troubling sign that has given rise to the Fermi Paradox. So assuming we can start planet hopping, it might just turn out that we are the first and only civilization to embark upon such a journey. It's something that we must try; the future of life in our Galaxy could depend on it. But more to the point, interstellar colonization would also allow our species to expand into the cosmos and flourish.

5. Inner space, not outer space


Alternatively (or in conjunction with space travel), we could attain an ideal existential mode by uploading ourselves into massive supercomputers.

It's an idea that makes a lot of sense; given the computational capacity of a megascale computer, like a Matrioshka Brain (in which the matter of an entire planet is utilized for the purpose of computation) or a Dyson Sphere (which can capture the energy output of the sun), there would be more to experience in a simulated universe than in the real one itself. According to Robert Bradbury, a single multi-layer Matrioshka Brain could perform about 10^42 operations per second, while Seth Lloyd has theorized about a quantum system that could conceivably calculate 5×10^50 logical operations per second carried out on ~10^31 bits. Given the kinds of simulated worlds, minds, and experiences this kind of power could generate, the analog world would likely appear agonizingly slow, primitive, and exceptionally boring.
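To get a feel for those numbers, here is a back-of-envelope comparison. The ~10^16 operations per second needed to simulate a single human brain is a rough, contested estimate, assumed here purely for scale:

    matrioshka_ops = 1e42  # Bradbury's Matrioshka Brain estimate, quoted above
    brain_ops = 1e16       # assumed cost of simulating one human mind in real time
    print(f"{matrioshka_ops / brain_ops:.0e} simultaneous human-brain-equivalents")
    # -> 1e+26, roughly a quadrillion times the number of people who have ever lived

In other words, even under generous assumptions about the cost of a mind, a single such structure could host an unfathomably large simulated civilization.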

6. Eternal bliss


Virtually every religion fantasizes about a utopian afterlife. This only makes sense given the imperfections and dangers of the real world; religion gives people the opportunity to express their wildest projections of an ideal state of existence. Given our modern materialist proclivities, many of us no longer believe in heaven or anything else awaiting us in some supposed afterlife. But that doesn't mean we can't create a virtual heaven on Earth using our technologies.

This is what the British philosopher David Pearce refers to as the Hedonistic Imperative — the elimination of all suffering and the onset of perpetual pleasure. This could be as simple as eliminating pain and negative emotional states, or something far more dramatic and profound, like maximizing the amount of psychological, emotional, and physical pleasure that a single consciousness can experience. Given that we live in a hostile universe with no meaning other than what we ascribe to it, who's to say that entering into a permanent state of bliss is somehow wrong or immoral? While it may be offensive to our Puritan sensibilities, it most certainly appeals to our spiritual and metaphysical longings. A strong case can be made that maximizing the human capacity for pleasure is as valid a purpose as any other.

7. Cosmological transcension


This is basically a placeholder for those far-off future states we can't possibly imagine — but are desirable nonetheless. While this line of speculation tends to venture into the realms of philosophy and metaphysics (not that many of the other items on this list haven't done the same), it's still interesting and worthwhile to consider some super-speculative possibilities. For example, futurist John Smart has suggested that human civilization is increasingly migrating into smaller and smaller increments of matter, energy, space, and time (MEST). Eventually, he argues, we'll take our collective intelligence into a cosmological realm with the same efficiency and density as a black hole — where we'll essentially escape the universe.

Alternatively, forward-looking thinkers like Robert Lanza and James Gardner have speculated about a universe that's meant to work in tandem with the intelligence it generates. This idea, called biocentrism, suggests that the universe is still in an immature phase, and that at some future point, all the advanced intelligent life within it will guide its ongoing development. This would result in a Universe dramatically different from what we live in today. And then there are other possibilities such as time travel and the exploitation of quantum effects. Indeed, given just how much we don't know about what we don't know, the future may be full of even more radical possibilities than we're currently capable of imagining.

This article originally appeared at io9.


Interview: Journalism, Human Enhancement and the Singularity

I was recently interviewed by Adam Ford while attending the Humanity+ conference in San Francisco.