While funny and undoubtedly tongue-in-cheek, this comic gets it all wrong about human enhancement and its relation to social Darwinism:
Rather, transhumanism should be seen as a force for social justice, egalitarianism and a means to reduce human suffering. And of course, as a way for people to experience life at its maximum potential.
The idea of a "robot" vs. "human" war is wrongheaded from the get-go, but how could we ensure that the benefits would really accrue equally? This is something that has worried me. As humans in a position to improve themselves do so, they (on average) increase their access to further self-improvement, in an accelerating cycle. It wouldn't take very long for a subset of humanity to vastly outstrip the rest. This isn't a problem if the advancement of one portion doesn't preclude that of the rest, but I see nothing to guarantee that except the goodwill of the advanced.
There is, I think, still some role to be played by social technologies in maintaining some equality of opportunity, filling the role currently served by the enduring similarities in individual biological potential. At some point in the future we may once again bump up against ineluctable limits to self-modification (say, the base processing speed of physical reality), but I think we have a stability curve to negotiate.
The link should be to this permanent one: http://www.tgsa-comic.com/view.php?page=2008-04-02
Fixed and with thanks.
Speaking of comics: There's a new mini series called Transhuman from publisher Image.
(On an only semi-related note I love Age of Bronze *, Eric Shanower's extremely well-researched history of the Trojan War.)
* Complete first issue at Image.
If you were an artilect that was superior in every way to humans, would you really care about wanting to enhance them?
Or would you rather explore not only your own full potential but also the entire Cosmos, which would be feasible since you would no longer be a limited organic creature?
Are you a Cosmist or a Terran? There will be only one choice eventually, and one will lead to the total extinction of all intelligence that came from Earth, while the other will allow True Intelligence to finally make its way into the galaxy and beyond.
Anyone who says AI won't hurt people or shouldn't is just trying to placate the fears of a weak and limited species.