I first met Michael in 2003 at a transhumanist conference at Yale, and we've kept in touch ever since. Our visions of the future, in terms of both its risks and its benefits, have always been closely aligned, whether in how we've come to define the perpetually amorphous Technological Singularity or in what we envision as the posthuman condition. We also share similar ethical sensibilities; both of us are outspoken advocates of animal welfare, vegetarianism, and the abolitionist imperative.
A bit about Michael:
He is Media Director and a fundraiser for the Singularity Institute for Artificial Intelligence, an organization devoted to research on safe artificial intelligence, and Fundraising Director, North America, for the Lifeboat Foundation. He is also the founder of the Immortality Institute, a grassroots life extension advocacy organization. In addition, he's a prolific freelance science and technology writer: he has contributed over 1,600 short factual articles to WiseGeek.com, a website that promises “clear answers for common questions” and that received over 62 million visitors last year.
Back in 2006, Michael expanded his capacity to hound others about the wonders of technology and its central significance to humanity’s future by starting his blog, Accelerating Future. Since its founding, the blog has received over 5 million visits and has been featured on G4.TV’s Attack of the Show and on SciFi.com; his posts have appeared on the front pages of Digg and Reddit. He has also received some media coverage, including an interview with Psychology Today in 2008. Since 2003, Michael has given talks on the risks and benefits of emerging technologies at conferences and seminars in San Francisco, Los Angeles, Las Vegas, Palo Alto, and at Yale University.
The topics that Michael likes to focus on are artificial intelligence, molecular nanotechnology, transhumanism, intelligence enhancement and extinction risk. He believes that humanity could rise or fall in the 21st century depending on how we handle these technologies. His views on the dangers of advanced technology are very similar to those presented by Bill Joy in his WIRED article, “Why the future doesn’t need us”, except Michael believes that relinquishment is infeasible. Instead, he advocates selective development — the acceleration of beneficial technologies (especially intelligence enhancement and Friendly AI) so that their problem-solving capabilities can be brought to bear on the risks of other advanced technologies.
Michael is allied with the Institute for Ethics and Emerging Technologies, to which he often contributes articles. The IEET’s mission is to become a center for voices arguing for a responsible, constructive approach to emerging human enhancement technologies. The organization believes that technological progress can be a catalyst for positive human development so long as we ensure that technologies are safe and equitably distributed. The basic idea is that human enhancement technology should be cautiously embraced rather than ignored or rejected.
Michael has been a consultant for a variety of future-oriented non-profit organizations and for-profit companies, including the Methuselah Foundation, the Center for Responsible Nanotechnology, and Kurzweil Technologies. On behalf of the Lifeboat Foundation and the Singularity Institute, Michael has consulted for organizations such as the US Navy, spoken informally with reporters at outlets such as The New York Times, New Scientist, the Financial Times, and The Guardian, and met with staff in the offices of US senators and congressmen.
Look for Michael to blog throughout the month of June. Should make for a fascinating exchange of ideas.