September 17, 2004

Ben Goertzel's kinds of minds


Artificial intelligence researcher Ben Goertzel -- the guy I'd vote "most likely to develop artificial general intelligence" -- recently published an article in FrontierNumber4 titled "Kinds of Minds."

Goertzel is part of the Artificial General Intelligence Research Institute, a "small nonprofit organization, whose overall mission is to work toward the creation of powerful, ethically positive Artificial General Intelligence." Specifically, the group is working on the Novamente AI Engine, an in-development software system aimed at the lofty goal of true artificial general intelligence -- "at the human level and beyond."

In the FN4 article, Goertzel describes the various ways in which intelligence and consciousness can plausibly exist -- their embodied nature, the kinds of social interactions involved, telepathy, and so on. He even tackles some interesting quantum consciousness issues.

Describing humans versus conjectured future Novamentes, Goertzel writes,
Given all these ontological categories, we may now position the human mind as being: singly-embodied, singly-body-centered, tool and socially dependent and language-enabled, and conservatively-structured. Furthermore, while it may possibly utilize quantum effects in some way, it is clearly very limited in its ability to do quantum-based reasoning.

The Novamente AI system is intended to be a somewhat different kind of mind: flexibly embodied, flexibly body-centered, tool and socially dependent, language and telepathy enabled, radically self-modifying, and potentially fully quantum-enabled. Also, Novamentes are explicitly designed to be formed into a community structured according to the "telepathic BOA mindplex" arrangement.

It might seem wiser, to some, to begin one’s adventures in the AI domain by sticking more closely to the nature of human intelligence. But my view is that some of the limitations imposed by the nature of humans’ physical embodiment pose significant impediments to the development of intelligence, as well as to the development of positive ethics. Single embodiment and the lack of any form of telepathy are profound shortcomings, and there seems absolutely no need to build these shortcomings into our AI systems. Rather, the path to creating highly intelligent software will be shorter and simpler if we make use of the capability digital technology presents for overcoming these limitations of human-style embodiment. And the minds created in this way will lack some of the self-centeredness and parochialism displayed by humans -- much of which, I believe, is rooted precisely in our single-bodiedness and our lack of telepathic interaction.

The article is quite fascinating -- one that reminds me of Barry Dainton's "Innocence Lost: Simulation Scenarios: Prospects and Consequences," in which he speculates about different kinds of simulations. I strongly recommend both articles for those with an interest in such futurist speculations.