October 31, 2010

Ben Goertzel dismisses Singularity Institute's "Scary Idea"

AI theorist Ben Goertzel has posted an article in which he declares his rejection of the Singularity Institute for Artificial Intelligence's claim that "progressing toward advanced AGI without a design for 'provably non-dangerous AGI' ... is highly likely to lead to an involuntary end for the human race." Goertzel calls this their "Scary Idea" and attempts to show that the fear is largely overstated.

He breaks the SIAI argument down to four primary points:

  1. If one pulled a random mind from the space of all possible minds, the odds of it being friendly to humans (as opposed to, e.g., utterly ignoring us, and being willing to repurpose our molecules for its own ends) are very low
  2. Human value is fragile as well as complex, so if you create an AGI with a roughly-human-like value system, then this may not be good enough, and it is likely to rapidly diverge into something with little or no respect for human values
  3. "Hard takeoffs" (in which AGIs recursively self-improve and massively increase their intelligence) are fairly likely once AGI reaches a certain level of intelligence; and humans will have little hope of stopping these events
  4. A hard takeoff, unless it starts from an AGI designed in a "provably Friendly" way, is highly likely to lead to an AGI system that doesn't respect the rights of humans to exist
Taking these points into consideration, Goertzel pieces together what he feels is the SIAI's argument:
If someone builds an advanced AGI without a provably Friendly architecture, probably it will have a hard takeoff, and then probably this will lead to a superhuman AGI system with an architecture drawn from the vast majority of mind-architectures that are not sufficiently harmonious with the complex, fragile human value system to make humans happy and keep humans around.
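It's worth pausing on point 3, since the "hard takeoff" is what gives the argument its teeth: the claim is that intelligence improvement feeds back on itself, so growth proportional to current capability quickly dwarfs any fixed rate of externally driven progress. Here's a minimal toy simulation of that intuition; it's entirely my own sketch, and the rates and threshold are made-up parameters, not anything from Goertzel or SIAI:

```python
# Toy model of a "hard takeoff": a fixed-rate improver versus a system whose
# gain per step is proportional to its current level. All numbers are
# illustrative placeholders, not claims about real AGI dynamics.

def simulate(i0=1.0, human_rate=0.01, self_rate=0.05, steps=200):
    """Compare fixed-rate progress with recursively self-improving progress."""
    linear, recursive = i0, i0
    for t in range(steps):
        linear += human_rate                # fixed gain per step
        recursive += self_rate * recursive  # gain proportional to current level
        if recursive > 100 * linear:        # arbitrary "takeoff" threshold
            print(f"step {t}: the self-improver is 100x the baseline")
            break
    return linear, recursive

simulate()
```

The only point of the toy is that exponential self-improvement overtakes linear progress on a short timescale; whether real AGI development would behave anything like this is precisely what Goertzel and SIAI are arguing about.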
Goertzel then expresses his particular concerns with this argument, including with Eliezer Yudkowsky's proposal for getting human values into an AGI system, which Yudkowsky calls Coherent Extrapolated Volition:
...I think this is a very science-fictional and incredibly infeasible idea (though a great SF notion). I've discussed it and proposed some possibly more realistic alternatives in a previous blog post (e.g. a notion called Coherent Aggregated Volition). But my proposed alternatives aren't guaranteed-to-succeed nor neatly formalized.

But setting those worries aside, is the computation-theoretic version of provably safe AI even possible? Could one design an AGI system and prove in advance that, given certain reasonable assumptions about physics and its environment, it would never veer too far from its initial goal (e.g. a formalized version of the goal of treating humans safely, or whatever)?

I very much doubt one can do so, except via designing a fictitious AGI that can't really be implemented because it uses infeasibly much computational resources. My GOLEM design, sketched in this article, seems to me a possible path to a provably safe AGI -- but it's too computationally wasteful to be practically feasible.
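To make "prove in advance" concrete: in the formal-methods sense it means stating an invariant and producing a machine-checked proof that every reachable state of the system satisfies it. Here's a deliberately trivial sketch in Lean (my own toy example, with no relation to GOLEM or any actual AGI architecture); the "system" is just a bounded counter, but the shape of the argument, invariant plus preservation plus induction over steps, is what a provably safe design would have to deliver at vastly greater scale:

```lean
-- Toy sketch of "provable safety": a machine-checked proof that an invariant
-- holds in every reachable state. The "system" is a trivial bounded counter,
-- nothing like a real AGI; only the shape of the argument matters here.

def step (s : Nat) : Nat := min (s + 1) 100

-- Safety invariant: the state never exceeds the bound.
def safe (s : Nat) : Prop := s ≤ 100

-- Preservation: a single step never violates the invariant.
theorem step_preserves_safe (s : Nat) : safe (step s) := by
  unfold safe step
  exact Nat.min_le_right _ _

-- Running the system for n steps.
def run : Nat → Nat → Nat
  | 0,     s => s
  | n + 1, s => run n (step s)

-- Any execution from a safe start stays safe, by induction on step count.
theorem run_safe (n s : Nat) (h : safe s) : safe (run n s) := by
  induction n generalizing s with
  | zero => exact h
  | succ k ih =>
    show safe (run k (step s))
    exact ih (step s) (step_preserves_safe s)
```

Goertzel's worry, restated in these terms, is that anything rich enough to be an AGI makes the preservation lemma intractable to state, let alone prove, without infeasibly large computational resources.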
Oooh, it looks like we have the makings of a great debate here. I'll be interested to see whether SIAI responds and how they address Goertzel's concerns.

October 30, 2010

This Magazine: Technology, ethics, and the real meaning of the “Rapture of the Nerds”

Image: Chris Kim
Keith Norbury of This Magazine has published a piece called Technology, ethics, and the real meaning of the “Rapture of the Nerds”. I was interviewed for the article and asked about the state of transhumanism and Singularitarianism today in Toronto and in Canada more generally. We also discussed the tendency of the press and the public to roll all transhumanists into the Singularity camp, which, as I pointed out, is a mistake:
Not all people who believe in technology’s power to transform humanity are Singularitarians. Transhumanists, as their name implies, also expect technology to alter the species. “These are two communities that seem to have a connection,” says George Dvorsky, president of the Toronto Transhumanist Association. “It doesn’t necessarily mean that one follows the other. I happen to know many transhumanists who don’t buy into the Singularity at all.”

While both groups believe that rapid technological progress will radically reshape our lives, the Singularitarians believe a unified, superhuman intelligence is a necessary part of that change. Transhumanists believe no such super-intelligent entity is necessary. Either way, both believe that our future will be completely unrecognizable. “We are talking about transforming what it means to be human,” Dvorsky says.
The article goes on to describe how interest in the TTA and local transhumanist chapters has waned over the past several years. I'm rather frustrated by Norbury's angle here, which is to suggest that the fringe is getting fringier and that no good work is being done in these areas through other channels. The fact of the matter is that these ideas, namely human enhancement and the unknown potential of greater-than-human artificial intelligence, are being addressed by a diverse and distributed group of individuals, and, just as importantly, these ideas are slowly (but surely) being normalized in our daily discourse.

Indeed, organizing local meet-ups is all fine and well, but that's not where the rubber hits the road. I've made a conscious effort over the past few years to devote most of my time and energy to my blog, Humanity+, and the Institute for Ethics and Emerging Technologies, where my outreach is considerably greater and more impactful than it could be through a local chapter alone. Annoyingly, Norbury failed to mention any of these, choosing instead to focus on the TTA and chapter-level organizing, which is no longer of any real interest to me.

August 2, 2010

Abou Farman: The Intelligent Universe

Abou Farman has penned a must-read essay about Singularitarianism and modern futurism, even if you don't agree with him or his occasionally sleight-of-hand dismissals. Dude has clearly done his homework, and the result is provocative and insightful commentary.

Thinkers mentioned in this article include Ray Kurzweil, Eliezer Yudkowsky, Giulio Prisco, Jamais Cascio, Tyler Emerson, Michael Anissimov, Michael Vassar, Bill Joy, Ben Goertzel, Stephen Wolfram, and many, many more.

Excerpt:
Images of transhuman and posthuman figures, hybrids and chimeras, robots and nanobots became uncannily real, blurring further the distinction between science and science fiction. Now, no one says a given innovation can’t happen; the naysayers simply argue that it shouldn’t. But if the proliferating future scenarios no longer seem like science fiction, they are not exactly fact either—not yet. They are still stories about the future and they are stories about science, though they can no longer be banished to the bantustans of unlikely sci-fi. In a promise-oriented world of fast-paced technological change, prediction is the new basis of authority.

That is why futurist groups, operating thus far on the margins of cultural conversation, were thrust into the most significant discussions of the twenty-first century: What is biological, what artificial? Who owns life when it’s bred in the lab? Should there be cut-off lines to technological interventions into life itself, into our DNA, our neurological structures, or those of our foodstuffs? What will happen to human rights when the contours of what is human become blurred through technology?

The futurist movement, in a sense, went viral. Bill McKibben’s Enough (2004) faced off against biophysicist Gregory Stock’s Redesigning Humans (2002) on television and around the web. New groups and think tanks formed every day, among them the Foresight Institute and the Extropy Institute. Their general membership started to overlap, as did their boards of directors, with figures like Ray Kurzweil ubiquitous. Heavyweight participants include Eric Drexler—the father of nanotechnology—and MIT giant Marvin Minsky. One organization, the World Transhumanist Association, which broke off from the Extropy Institute in 1998, counts six thousand members, with chapters across the globe.

If the emergence of NBIC and the new culture of prediction galvanized futurists, the members were also united by an obligatory and almost imperial sense of optimism, eschewing the dystopian visions of the eighties and nineties. They also learned the dangers of too much enthusiasm. For example, the Singularity Institute, wary of sounding too religious or rapturous, presents its official version of the future in a deliberately understated tone: “The transformation of civilisation into a genuinely nice place to live could occur, not in a distant million-year future, but within our own lifetimes.”
Link.

June 24, 2010

Singularity Summit 2010: August 14-15

The Singularity Summit for 2010 has been announced and will be held on August 14-15 at the San Francisco Hyatt Regency. Be sure to register soon.

This year's Summit, which is hosted by the Singularity Institute, will focus on neuroscience, bioscience, cognitive enhancement, and other explorations of what Vernor Vinge called 'intelligence amplification' -- the other route to the technological Singularity.

Of particular interest to me will be the talk given by Irene Pepperberg, author of "Alex & Me," who has pushed the frontier of animal intelligence with her research on African Grey parrots. She will be exploring the ethical and practical implications of non-human intelligence enhancement and of the creation of new intelligent life less powerful than ourselves.

A sampling of the speakers list includes:
  • Ray Kurzweil, inventor, futurist, author of The Singularity is Near
  • James Randi, skeptic-magician, founder of the James Randi Educational Foundation
  • Dr. Anita Goel, a leader in the field of bionanotechnology, Founder & CEO, Nanobiosym, Inc.
  • Dr. Irene Pepperberg, leading investigator of animal intelligence, trainer of the African Grey Parrot "Alex"
  • Prof. Alan Snyder, Director, Centre for the Mind at the University of Sydney, researcher in brain-computer interfaces
  • Prof. Steven Mann, augmented reality pioneer, professor at University of Toronto, "world's first cyborg"
  • Dr. Gregory Stock, bioethicist and biotech entrepreneur, author of Redesigning Humans: Our Inevitable Genetic Future
  • Dr. Ellen Heber-Katz, a professor at the Wistar Institute who studies rapid-regenerating mice
  • Joe Z. Tsien, scholar at the Medical College of Georgia, who created a strain of "Doogie Mouse" with twice the memory of average mice
  • Eliezer Yudkowsky, research fellow with the Singularity Institute
  • Michael Vassar, president of the Singularity Institute
  • David Hanson, CEO of Hanson Robotics, creator of the world's most realistic humanoid robots
  • Demis Hassabis, research fellow at the Gatsby Computational Neuroscience Unit at University College London
From the press release:
Will it one day become possible to boost human intelligence using brain implants, or create an artificial intelligence smarter than Einstein? In a 1993 paper presented to NASA, science fiction author and mathematician Vernor Vinge called such a hypothetical event a "Singularity", saying "From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye". Vinge pointed out that intelligence enhancement could lead to "closing the loop" between intelligence and technology, creating a positive feedback effect.

This August 14-15, hundreds of AI researchers, robotics experts, philosophers, entrepreneurs, scientists, and interested laypeople will converge in San Francisco to address the Singularity and related issues at the only conference on the topic, the Singularity Summit. Experts in fields including animal intelligence, artificial intelligence, brain-computer interfacing, tissue regeneration, medical ethics, computational neurobiology, augmented reality, and more will share their latest research and explore its implications for the future of humanity.