August 16, 2009

The Real Way to Feel Safe with Artificial Intelligence

Cross-posted at http://davidbrin.blogspot.com/... anyone is welcome to join discussion there....


=====

Sorry to have posted so little, of late.  We have been ensnared by a huge and complex Eagle Scout Project here... plus another kid making Black Belt, and yet another at Screenwriting camp... then the first one showing me endless online photos of "cars it would be cool to buy..."

And so, clearing my deck of topics to rant about, I'd like to post quickly this rumination on giving rights to artificial intelligences.  Bruce Sterling has lately raised this perennial issue, as did Mike Treder in an excellent piece suggesting that our initial attitudes toward such creatures may color the entire outcome of a purported "technological singularity."


The Real Reason to Ensure AI Rights

No issue is of greater importance than ensuring that our new, quasi-intelligent creations are raised properly.  To oversimplify terribly, Hollywood visions of future machine intelligence range from TERMINATOR-like madness to the admirable traits portrayed in movies like AI or BICENTENNIAL MAN.

I've spoken elsewhere of one great irony -- that there is nothing new about this endeavor.  Every human generation embarks upon a similar exercise -- creating new entities that start out less intelligent and virtually helpless, but gradually transform into beings that are stronger, more capable, and sometimes more brilliant than their parents can imagine.

The difference between this older style of parenthood and the New Creation is not only that we are attempting to do all of the design de novo, with very little help from nature or evolution, but also that the pace is speeding up. It may even accelerate, once semi-intelligent computers assist in fashioning new and better successors.  

Humanity is used to the older method, in which each next generation reliably includes many who rise up, better than their ancestors... while many others sink lower, even into depravity.  It all sort of balanced out (amid great pain), but henceforth we cannot afford such haphazard ratios, from either our traditional-organic heirs or their cybernetic creche-mates.

I agree that our near-future politics and social norms will powerfully affect what kind of "singularity" transformation we'll get -- ranging from the dismal fears of Bill Joy and Ted Kaczynski to the fizzing fantasies of Ray Kurzweil.  But first, let me say that the deciding factor will not be the surface politics of our useless, almost-meaningless, so-called Left-vs-Right axis. Nor will it be primarily a matter of allocating taxed resources. Except for investments in science, education, and infrastructure, those are not where the main action will be.  They will not determine the difference between "good" and "bad" transcendence -- between THE MATRIX and, say, FOUNDATION'S TRIUMPH.

No, what I figure will be the determining issue is this: shall we maintain our momentum, and our fealty to the underlying concepts of the Western Enlightenment?  Concepts that run even deeper than democracy or the principle of equal rights, because they form the pragmatic basis for our entire renaissance.


Going With What Has Already Worked

These are, I believe, the pillars of our civilization -- the reasons that we have accomplished so much more than any other, and why we may even succeed in doing it right, when we create Neo-Humanity.

1.  We acknowledge that individual human beings  -- and also, presumably, the expected caste of neo-humans -- are inherently flawed in their subjectively biased views of the world.  

In other words...  we are all delusional! Even the very best of us.  Even (despite all their protestations to the contrary) all leaders.  And even (especially) those of you out there who believe that you have it all sussed.

This is crucial. Six thousand years of history show this to be the one towering fact of human nature.  Our combination of delusion and denial is the core predicament that has stymied our creative, problem-solving abilities, delaying the great flowering that we're now part of.

These dismal traits still erupt everywhere, in all of us.  Moreover, it is especially important to assume that delusion and denial will arise, inevitably, in the new intelligent entities that we're about to create.  If we are wise parents, we will teach them to say what all good scientists are schooled to say, repeatedly: "I might be mistaken."  But that, alone, is not enough.

2.  There is a solution to this curse, but it is not at all the one that was recommended by Plato, or any of the other great sages of the past.

Oh, they knew all about the delusion problem, of course.  See Plato's "allegory of the cave," or the sayings of Buddha, or any of a myriad other sage critiques of fallible human subjectivity.  These savants were correct to point at the core problem... only then, each of them claimed that it could be solved by following their exact prescription for Right Thinking. And followers bought in, reciting or following the incantations and flattering themselves that they had a path that freed them of error.

Painfully, at great cost, we have learned that there is no such prescription. Alack, the net sum of "wisdom" that those prophets all offered only wound up fostering even more delusion.  It turns out that nothing -- no method or palliative applied by a single human mind, upon itself -- will ever accomplish the objective.  

Oh, sure, logic and reason and sound habits of scientifically-informed self-doubt can help a lot.  They may cut the error rate in half, or even by a factor of a hundred!  Nevertheless, you and I are still delusional twits.  We always will be!  It is inherent.  Live with it.  Our ancestors had to live with the consequences of this inherent human curse.

Ah, but things turned out not to be hopeless, after all!  For, eventually, the Enlightenment offered a completely different way to deal with this perennial dilemma.  We (and presumably our neo-human creations) can be forced to notice, acknowledge, and sometimes even correct our favorite delusions, through one trick that lies at the heart of every Enlightenment innovation -- the processes called Reciprocal Accountability (RA).  

In order to overcome denial and delusion, the Enlightenment tried something unprecedented -- doing without the gurus and sages and kings and priests.  Instead, it nurtured competitive systems in markets, democracy, science and courts, through which back and forth criticism is encouraged to flow, detecting many errors and allowing many innovations to improve.  Oh, competition isn't everything! Cooperation and generosity and ideals are clearly important parts of the process, too. But ingrained reciprocality of criticism -- inescapable by any leader -- is the core innovation.

3.  These systems -- including the "checks and balances" exemplified in the U.S. Constitution -- help to prevent the sole-sourcing of power, not only by old-fashioned human tyrants but also by the kind of runaway Singularity we all fear, one controlled by just one or a few mega-machine-minds -- the nightmare scenarios portrayed in The Matrix, Terminator, or the Asimov universe.


The Way to Ensure AI is Both Sane and Wise

How can we ever feel safe, in a near future dominated by powerful artificial intelligences that far outstrip our own? What force or power could possibly keep such a being, or beings, accountable?  

Um, by now, isn't it obvious?

The most reassuring thing that could happen would be for us mere legacy/organic humans to peer upward and see a great diversity of mega minds, contending with each other, politely, and under civil rules, but vigorously nonetheless, holding each other to account and ensuring everything is above-board.  

This outcome -- almost never portrayed in fiction -- would strike us as inherently more likely to be safe and successful.  After all, isn't that today's situation?  The vast majority of citizens do not understand arcane matters of science or policy or finance.  They watch the wrangling among alphas and are reassured to see them applying accountability upon each other... a reassurance that was betrayed by recent attempts to draw clouds of secrecy across all of our deliberative processes.

Sure, it is profoundly imperfect, and fickle citizens can be swayed by mogul-controlled media to cast their votes in unwise directions.  We sigh and shake our heads... as future AI Leaders will moan in near-despair over organic-human sovereignty.  But, if they are truly wise, they'll continue this compact.  Because the most far-seeing among them will recognize that "I might be wrong" is still the greatest thing that any mind can say.  And that reciprocal criticism is even better.

Alas, even those who want to keep our values strong, heading into the Singularity Age, seldom parse it down to this fundamental level.  They talk - for example - about giving AI "rights" in purely moral terms...  or perhaps to placate them and prevent them from rebelling and squashing us.

But the real reason to do this is far more pragmatic.  If the new AIs feel vested in a civilization that considers them "human," then they may engage in our give-and-take process of shining light upon delusion.  Each other's delusions, above all.

Reciprocal accountability -- extrapolated to a higher level -- may thus maintain the core innovation of our civilization.  Its central and vital insight.

And thus, we may find that our new leaders -- our godlike grandchildren -- will still care about us... and keep trying to explain.

-

34 comments:

  1. "The most reassuring thing that could happen would be for us mere legacy/organic humans to peer upward and see a great diversity of mega minds, contending with each other, politely, and under civil rules, but vigorously nonetheless, holding each other to account and ensuring everything is above-board."

    What would be wrong with having just one powerful AI [a singleton] that was carefully programmed to achieve our true goals?

    You seem to be anthropomorphizing all artificial minds, generalizing human-specific traits to minds that are not at all human.

    read: Humans in Funny Suits

  2. This comment has been removed by the author.

  3. In fact, better still, see

    Anthropomorphism

    on the LessWrong wiki.

    Roko, your suggestion has been tried in every single human civilization, and it has always utterly failed. Gödel's proof shows that no system can internally note all of its own implications and errors.

  5. This comment has been removed by the author.

  6. @David Brin: Firstly, humans are a very specific kind of mind with certain failure modes such as being corrupted by power. There are many possible intelligent synthetic minds, most of which do not share the very specific weaknesses of human minds. The failure of humans N times over at benevolent dictatorship doesn't mean that every single possible kind of AI would fail that way too.

    To argue that all possible synthetic minds will fall prey to human corruptibility is to anthropomorphize nonhuman minds which are not at all like humans, rather like the difference between this anthropomorphism and the corresponding reality.

    Regarding Gödel's incompleteness theorem, it is not obvious to me that it applies here.

    "Gödel's proof shows that no system can internally note all of its own implications and errors."

    This is not what Gödel actually proved.

    Gödel's first incompleteness theorem shows that any formal system that includes enough of the theory of the natural numbers is incomplete: there are statements in its language that it can neither prove nor refute.
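    A standard textbook paraphrase, in symbols (roughly stated, and assuming T is a consistent, effectively axiomatized theory containing enough arithmetic):

```latex
% Schematic statement of the first incompleteness theorem (needs amssymb
% for \nvdash): for such a theory T there is a sentence G_T -- the
% "Goedel sentence" of T -- that T can neither prove nor refute.
\[
  T \nvdash G_T
  \qquad\text{and}\qquad
  T \nvdash \neg G_T .
\]
```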


    If you want to apply Gödel's theorem, you first have to point to a formal deductive system. A computer program interacting with the world is not a formal deductive system - at least I cannot see how to make it into one.

  7. @Roko

    I believe one of the sayings in evolutionary biology is "Evolution is always smarter than you." Humans have turned out the way we have for a wide variety of reasons, and every time we've thought we "fixed" a problem, it's been something of a disaster. What evidence do we have that any superintelligence could be programmed in the way you blithely claim is plausible? None. Perhaps some superintelligence could do such a thing, but I'm not aware of any other serious thinker in modern AI who thinks we could do so.

  8. @Nato: "What evidence do we have that any superintelligence could be programmed in the way you blithely claim is plausible? None. Perhaps some superintelligence could do such a thing, but I'm not aware of any other serious thinker in modern AI who thinks we could do so."

    Well, you don't really have any evidence against the proposition either (the stuff you said about correcting problems in humans seems neither here nor there - we don't really want to extrapolate from humans to nonhuman AI - right?).

    The thing to do is to think about going out and doing some (careful) experimentation to gather data or perform theoretical analyses which might shed light on the issue.

    In cases where the answer is crucially important but you have little evidence either way, one should go and collect some more evidence.

  9. One key point: in an ideal, and very possible future, WE will be the mega-intelligences.

    Those among us with the wealth or position will meld with smart technology through tweaking our neurology, using gene therapy and drugs, or simply with highly advanced computers that help us in much the same way cell phones and laptops do now. By the time we have highly intelligent AGI, we will have the capability to fully map out human neurology and probably will have done it, so we will be working not only on smarter machines, but on smarter selves.

    So WE will be the super intelligent cyborgs with the power to create and solve all of our problems. Which requires a whole different set of regulations.

    Thoughts?

    Making a human brain a lot smarter is problematic. Our neurons are limited to roughly 10 Hz, compared to around 10,000,000,000 Hz for silicon. Our brain has to fit in our head; a supercomputer doesn't.

    Also, if you keep tweaking someone's brain, it is not obvious that they will stay sane and stay the "same person".

    Lastly, once we have enhanced humans, the first thing that they are likely to do is build a silicon AI, because that would be the best way of achieving their ends, given that one will probably get diminishing returns on further brain enhancement.

  11. Why do our brains have to fit in our heads? This is the future we're talking about!

    Also, there's no reason enhancements would have to be limited to biological changes. I see people hooking themselves up, perhaps wirelessly, to silicon machines.

    If I was an enhanced human, I would want to create a vast AI that I could be a part of.

  12. @Roko

    My first comment was sort of a quickie response, so I'll be happy to expand on the evidence matter. First, the only example of a working, contextually robust general purpose intelligence we have is humanity (and, to a lesser extent, our biological relatives). We also have examples of programmed intelligence, but none of them are very general purpose nor contextually robust. In fact, they are appallingly simple, relative to self-assembling systems like minds, and attempts to reproduce human capabilities through such programmatic approaches have never approached success. Once we wend our way into self-assembling intelligence systems that have a plausible shot at matching humans' generality and contextuality, we're quickly entering the realm of "do whatever the brain does, because it at least works." I can't prove that singleton AIs are impossible, of course, but neither can I disprove that the world was made last Thursday, so that's not meaningful. Instead, I would say that all attempts to date to do so have led us to believe that the challenges in that direction quickly become intractable.

    I'm very sure that many limits on human cognition are purely artifacts of evolutionary contingencies and attendant biological constraints. At the same time, however, there seems to be more and more evidence to suggest that many of the so-called 'weaknesses' of human cognition (e.g. forgetting) are in fact critical to success. We have plenty of evidence to suggest that the information topology of digital intelligences will converge to be very similar to that of biological intelligences. If you'd like citations, I'm sure I could come up with a few.

  13. As a side note - plenty of simulations start with populations of mutable agents with comparable goals. Once the primary determinant of success becomes each other rather than the vagaries of the environment, however, the same regularities of behavior fall out over and over because of the mathematics of game theory, which are non-contingent. Thus, other minds that deal with other minds can reasonably be expected to be substantially like ours because hey, they evolved to deal with the same game theoretical challenges!
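    As a rough illustration of the kind of simulation I mean (a minimal sketch, not any particular published model; the strategy names, payoff values, and population parameters are illustrative assumptions, and it uses selection only, with no mutation step): a population playing the iterated prisoner's dilemma against one another, reproducing in proportion to payoff, tends to end up dominated by reciprocating strategies once the agents' main "environment" is each other.

```python
# Sketch: evolving a population of iterated prisoner's dilemma strategies.
# Payoffs, round counts, and population size are arbitrary illustrative choices.
import random

# (my_move, their_move) -> (my_payoff, their_payoff)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_defect(my_hist, their_hist):
    return 'D'

def always_cooperate(my_hist, their_hist):
    return 'C'

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's previous move.
    return their_hist[-1] if their_hist else 'C'

STRATEGIES = [always_defect, always_cooperate, tit_for_tat]

def play(strat_a, strat_b, rounds=50):
    """Play one iterated game; return total payoff for each side."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def generation(population):
    """Round-robin tournament, then reproduce in proportion to total score."""
    scores = [0] * len(population)
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            s_i, s_j = play(population[i], population[j])
            scores[i] += s_i
            scores[j] += s_j
    return random.choices(population, weights=scores, k=len(population))

pop = [random.choice(STRATEGIES) for _ in range(30)]
for _ in range(20):
    pop = generation(pop)
print({s.__name__: pop.count(s) for s in STRATEGIES})
# Typically (not always -- drift matters in small populations) the
# reciprocating strategy, tit_for_tat, comes to dominate once payoffs
# depend mainly on the other agents rather than on the environment.
```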

  14. "We have plenty of evidence to suggest that the information topology of digital intelligences will converge to be very similar to that of biological intelligences."

    We have plenty of evidence that human-like minds will be the output of specific kinds of evolutionary processes (the one which has historically played out on this planet). To say that all points within the vastly larger space of possible digital-computer based intelligences will converge toward the same solutions which evolution coughed up is to make an unwarranted generalization.

  15. @DjAdvance: "If I was an enhanced human, I would want to create a vast AI that I could be a part of."

    Right, but then you had better create the right kind of "vast AI". Many AIs would look at your inefficient little meat-brain and re-use its atoms in a more efficient way, for example by deconstructing you and re-using the atoms to make some more optical/quantum computing elements. They would not necessarily care that this results in your death. Caring about you is not a common characteristic of AIs.

  16. @Nato: "neither can I disprove that the world was made last Thursday, so that's not meaningful. Instead, I would say that all attempts to date to do so have led us to believe that the challenges in that direction quickly become intractable."

    We have been working seriously on AGI for 50 years. My personal opinion as an AI researcher is that a lot of that work has been of poor quality because of human emotion and politics within AI, and because it is a hard problem that requires a combination of philosophical analysis and scientific exploration.

    In any case, if you look at the history of science, you see that 50 years is too short a time for the solution of a fundamental problem. Physics, chemistry, and biology all took longer than that. For example, the span from Aristotle's first work on classifying animals to the construction of the first man-made life form is likely to be about 2300 years; of course most of that 2300 is the dark ages, but even taking that into account I would still say hundreds of years were required.

  17. "To say that all points within the vastly larger space of possible digital-computer based intelligences will converge toward the same solutions which evolution coughed up is to make an unwarranted generalization."

    There is, of course, a difficulty in discussing abstractions like "the vastly larger space of possible digital-computer based intelligences" because, freed from the constraints of the much smaller space plausibly addressable from our current location, we can quickly get into definitional quandaries regarding just what we mean by 'intelligence' or even 'computer'. I presume for something to count as an intelligence, it would have to 'care' about something we care about, for example. Roko notes that "Caring about you is not a common characteristic of AIs", but nonetheless assumes that AIs do at least care about efficiency*. If AIs didn't care about something recognizable, how would we distinguish them from operation of the Universe at large? From a certain perspective bacteria are super-intelligent beings whose interests focus on the reproduction of DNA. One may or may not be able to defend against the assertion that the Moon is a super-intelligent being focused on orbiting the Earth in its exact pattern. A nuclear explosion is a wonderfully complex system of distributed interactions that performs a huge array of 'tasks' simultaneously. Needless to say, I am skeptical that one can achieve a meaningful definition of intelligence without importing a base set of human heuristics.

    But even then, one can imagine all sorts of different topologies that would result in a robust approach to the sorts of things humans find interesting - at least, if one sets a low bar for what counts as imagining such a thing. I can 'imagine' faster-than-light normal-space travel as long as I don't contemplate it in detail, but that doesn't tell me anything about what is, in fact, possible. It's becoming indisputable even for dualists that parts of the function calculated by human neurons can be compressed without meaningful loss. We can also build computers these days with computational horsepower sufficient to recapitulate all the calculations of a human brain in real time. We use such machines to recapitulate very different systems, such as nuclear explosions (which, for the purposes of this discussion, I will not consider a successful implementation of nuclear artificial intelligence). So, we have very modest computers that can perform some human tasks like grammar checking (in an extremely constrained context) using very different kinds of calculations, we have more advanced computers that use a mix of human-style and rule-based AI to play an acceptable game of ping-pong, and we have supercomputers that model things quite inconceivable to us.


    Oops, time to go. I'll try to come back and continue this later.
    *Interestingly, it is also not a 'common' characteristic of humans to care about you. This is true not just in the sense of it not being a characteristic common to all humans, but it's also not commonly true, since the vast majority of humans don't know you. Caring about efficiency probably is common to all intelligences - even a necessary feature of intelligence - but perhaps one could object that this is an unwarranted generalization, if one wanted to.

  18. "If AIs didn't care about something recognizable, how would we distinguish them from operation of the Universe at large?

    Do you mean "recognizable within the framework of human values" or "recognizable as the activity of a powerful cross-domain optimization process (intelligence)?" Roko points out above that those two cannot be expected to have any overlap unless we specifically make it so; hence this whole concern over how to *correctly* build Friendly AI, and not just how to build a software-based general intelligence.

    "From a certain perspective bacteria are super-intelligent beings whose interests focus on the reproduction of DNA."

    Bacteria are somewhat powerful in the sense of having optimization power, but they rank far lower on that scale than, say, chimps or humans. See Yudkowsky's 'Efficient Cross-Domain Optimization' for a definition of "intelligence" which (I think) most people seriously interested in Friendly AI consider to be adequate for cutting through most of the confusion.

  19. @Kazuo
    Quickie response to the whole idea of optimization-as-an-intelligence-measure: Optimization for *what*? If some kind of anthropomorphic assumptions are inadmissible, then we could rarely if ever decide what was optimized and what wasn't.

    Backing up a bit, my general point is that solving interesting problems is what qualifies something as intelligent in an interesting way. That collapses the "space of possible digital-computer based intelligences" to something much more likely to share information processing topology with existing solutions. The more similar the problem set, the more similar the probable approaches.

  20. @Nato:

    We could define a superintelligence to be a computer program which, when run on a computer with some reasonable set of actuators and sensors (those possessed by a normal computer will do), has the ability to substantially alter the universe in a way that is concordant with its goals, for example by quickly killing off the human race, building a 100 mile high gold statue of Elvis, etc.

    Have you read Shane Legg's thesis?
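    (For reference, the universal intelligence measure defined there runs roughly as follows -- I'm stating it from memory, so check the thesis for the exact setup:)

```latex
% Legg and Hutter's universal intelligence of an agent \pi, roughly:
% a complexity-weighted sum of its performance over all computable
% environments.
\[
  \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu},
\]
% where E is the set of computable reward-bearing environments, K(\mu)
% is the Kolmogorov complexity of environment \mu, and V^{\pi}_{\mu} is
% the expected total reward the agent \pi achieves in \mu.
```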

  21. We can recognize that vast physical change has happened without that change having to be concordant with our values. For example, the classic AI that turns the entire universe into little paperclip-shaped pieces of matter is clearly causing vast physical change, clearly having an "effect", even though we find its actions morally reprehensible.

  22. Roko,

    I am more confident than ever that the paper-clipper monster is a myth. I think the strict distinction between values (preferences) and decision options in standard decision theory will fail.

    The reason I think so is now much clearer to me - namely, an AI needs a way to set the priors to act in real-time, but this relies on our intuition (values, preferences). Specifically, I believe setting the priors relies on our aesthetic preferences.

    We have nothing to worry about. Although I agree it's true that AIs won't be ethical automatically, AIs with the wrong values will be limited (they won't be able to set the priors correctly, which will cause real-time computational intractability for them). The 'aspie' paper-clipper will never get off the ground.

    I may be proved right far sooner than I thought; look at the 'timeless decision theory' being talked about on 'Less Wrong' -- it's starting to hint at universal values (it's talking about decisions on the platonic level)... not there yet, but it's sufficiently intriguing that I think there's a chance I'll turn out to be right.

  23. Legg is the Machine Super Intelligence fellow, correct? I haven't read his thesis, no, though AFAIK its treatment effectively brackets the whole idea of likely topologies in the Universe-as-it-is. Further, my understanding is that he doesn't address the problems of control and self control. I don't mean this as a criticism of his work (with which I'm only passingly familiar); I just mean that so far as I can tell, Legg's thesis is orthogonal to my point.

  24. I was going to continue my earlier piece, but after rereading the thread, I wonder if perhaps I misconstrued your position somewhat and thus expressed my skepticism in a misleading way. So, I'll sort of start over:

    I do not think there are any good reasons to believe that we will be able to program - in the sense currently common for digital computers - a superintelligent AI. Despite the attractions of such mathematical idealizations, the best modern AIs on the really interesting multi-modal problems look ever more like their biological cousins and it's reasonable to propose that we will continue to discover that the "biological" approach is less contingent than one would assume, given the highly contingent nature of the evolutionary processes that 'discovered' the approach. It may just be that we as engineers are constitutionally incapable of writing the intricate, elegant and efficient (propositional?) logic of a tractable alternative general intelligence, but that still means that we're not likely to achieve such a thing on our own. Whether the barriers to that kind of intelligence are merely practical or more fundamental is perhaps prejudging the matter and widening the scope of dispute prematurely.

    That said, it's an interesting dispute in its own right, influencing where we might get the most payoff from research.

    Just to counter Roko's point, what's happening in the universe now is not the result of super-intelligences (or at least we don't think so); all examples of AIs so far are domain-specific, not general intelligence. It's just a rhetorical trick to call more abstract things like physics, evolution, etc. 'intelligences' (if they are, it's only in a narrow sense; they are not general intelligence).

    I repeat: without a way to set priors correctly, an AI will quickly be stopped by computational intractability. And you can't set those priors correctly without consciousness and intuition (aesthetic preferences).

    @Geddes: What about just using a uniform prior? Or a complexity prior? These seem to have nothing to do with aesthetics.

    @Nato: what you say about biologically inspired AI may be correct, but not all biological systems share human values. In fact, most don't.

  27. @Roko
    Most biological systems are not designed to operate in conditions similar to those of humans. Recent ethological research has suggested dogs have a sense of fairness, which makes sense given that they are, like humans, agents that evolved to deal with other agents*. AIs designed to deal with other agents would probably be most successful if they, too had a sense of fairness. It's not theoretically neat, of course, but it does seem inductively supportable.

    *The common ancestor of dogs and humans is actually more distant than that between humans and mice, so this would constitute an independent path to the same 'value'.

  28. > AIs designed to deal with other agents would probably be most successful if they, too had a sense of fairness.

    False in general. May be true if those other agents are of roughly equal power.

    And my main concern is not carefully designed AGIs, it is carelessly designed ones that self-modify in unanticipated ways.

    Have you read Omohundro's "Basic AI Drives"?

  29. @Roko What about just using a uniform prior? Or a complexity prior? These seem to have nothing to do with aesthetics.

    Uniform priors might work for simple problems, but increase the complexity and they’ll fail. Formal solutions for complexity priors that actually work (i.e. formalizations of Occam’s razor) are all uncomputable or intractable – in other words, they’re no solutions at all ;)
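    The canonical example is Solomonoff's universal prior (stating it roughly from memory):

```latex
% Solomonoff's universal prior over finite strings x, roughly: sum over
% all programs p whose output on a universal prefix machine U begins
% with x (written U(p) = x*), weighted by program length \ell(p) in bits.
\[
  M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)} .
\]
% M is only lower semicomputable: no algorithm evaluates it exactly, and
% the usual computable approximations blow up in practice.
```

    That is the sense in which I say these formalizations of Occam's razor are no solutions at all in practice.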

    I predict this problem will only be solved with a sentient system, which can use conscious intuition to determine the correct approximations (i.e. aesthetic preferences).

    Remember how I said I was really interested in Schmidhuber’s Theory of Beauty? The reason is that there’s a close connection between Occam’s razor, subjective beauty and priors.

    http://www.idsia.ch/~juergen/beauty.html

    Schmidhuber's Beauty Postulate (1994-2006): Among several patterns classified as "comparable" by some subjective observer, the subjectively most beautiful is the one with the simplest (shortest) description, given the observer's particular method for encoding and memorizing it. See refs [1-5].


    This is a strong clue suggesting aesthetic preferences as the means of setting priors.

    Actually I suspect that as you grow more intelligent, ordinary Bayesian updating matters less and less and setting priors matters more and more; that’s because the more intelligent you get, the more information is coming in, leading to a computational explosion in possible correlations. You need more and more to be able to cut down fruitless reasoning avenues, so you have to rely on priors more and more.
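    (Just to make the split explicit, here is the update rule I mean, with the prior as one factor and the likelihood-driven updating as the other:)

```latex
% Bayes' rule: P(H) is the prior over hypotheses -- the part I claim
% matters more and more -- while the likelihood P(D|H) is what drives
% the updating itself.
\[
  P(H \mid D) \;=\; \frac{P(D \mid H)\, P(H)}{P(D)} .
\]
```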

    I also strongly suspect that analogy formation is critical for deploying the intuition needed to set priors, and think it’s the core of rationality, whereas Yudkowskyian fan boys have focused on Bayesian updating rather than the problem of priors.

    The irony is that by the time you get to super-intelligence, Bayesian updating (aka Yudkowsky) is probably largely irrelevant, and setting the priors (aka Geddes) is almost everything: non-Bayesian conscious intuition, i.e. analogy formation, which is my basis of rationality. LOL

    @Marc: a great way to be continually wrong is to pick someone clever like Yudkowsky, and always claim the opposite of what he does, because you feel that you are "in competition" with him. I think that you have fallen into that trap.

    I sort of agree with Marc on this, in that you can't just throw a bunch of heuristic placeholders in a bowl and expect them to update their way to intelligence in any complex task, but I don't see how Marc's simplicity/beauty response solves the problem either. "Setting priors" seems important, of course, but once again I think it's useful to see how the problem has been solved before: DNA sets a gross topology in which to arrange heuristic blanks in such a way that the combinatorial explosion of possible correlations is limited, and effective updating can occur. Is that the only way to do it? I suspect some version of that approach would always be necessary. That said, arbitrary biological resource and spatial limitations have heavily influenced the trajectory of prior/updating importance, so it may well be that a digital computer could spend 15 years developing at toddler-like rates and so arrive at a superintelligence of which biological brains are incapable, but we'll see.

  32. @Roko,

    A great way to never achieve anything original is simply to believe whatever some authority tells you.

    I'm not making stuff up at random; all my rival ideas trace back to a single issue: reductionism versus non-reductionism. Methinks the SIAI school of thought has placed too much faith in the reductionist position, whereas non-reductionism is still a viable position. I recommend you read the works of Bohm, whom I regard as the champion of a non-reductionist position.

    In practical AI terms, what the reductionist/non-reductionist debate boils down to is whether all reasoning is reducible to Bayesian inference or not.

    The key issue is this: Can computable priors be derived from Bayesian inference or is something non-Bayesian going on?

  33. The substrate of intelligence could be 100% Bayesian updating and the problem wouldn't be solved, any more than knowing that a function is Turing computable tells you how to compute it, or even if it is computationally tractable.

  34. Congrats to the gents on Eagle Scout project, Black Belt-itude and screenwriting camp! (Article was great, too.)

