
Kramnik was in the unenviable position of having to play as Black in a must-win game. He opened very competently with the Sicilian Defense and looked very good until about the 20-move mark. In fact, most of the expert commentators were ecstatic about Kramnik's early development and Fritz's backtracking.
But as has been shown time and time again, it's one thing to sense that you have a good position against the machine, and quite another to actually execute a win. Over the next several moves, Fritz recovered and maneuvered itself such that Kramnik was absolutely hogtied. It didn't help that Kramnik made some questionable moves; he had far too many pieces that were inert and out of the action. Kramnik thought he might have achieved a small victory by trading queens in the middlegame (a recurring strategy for Kramnik, who strove to simplify the board), but it proved to be of no real gain.
I'm going to mull over these results and put together an article about the current state of chess and what the future might hold for man versus machine interactions.
5 comments:
George,
‘Deep Fritz’ is not even the world’s strongest chess program – Hydra probably is:
http://en.wikipedia.org/wiki/Hydra_%28chess%29
That’s because the strongest chess machines were excluded from consideration to play Kramnik, and Kramnik was even granted conditions highly favourable to him for the contest. It’s clear that the game of chess has now been mastered by computers, and the top machines could now consistently beat the world’s best. The top machines running on supercomputers (like Hydra) are estimated to have a rating of about 3,000 Elo. (A human world champion, by comparison, has an Elo rating of around 2,800, which is probably the limit of the human brain.)
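To put that 200-point gap in perspective, the standard Elo formula predicts the stronger player's expected score. A minimal sketch in Python, plugging in the estimated ratings above (estimates, not measured values):

    def expected_score(rating_a, rating_b):
        # Standard Elo expectation for player A against player B.
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

    # A 3,000-rated machine against a 2,800-rated human champion:
    print(expected_score(3000, 2800))  # ~0.76, i.e. about 76% of the points

So even a 200-point edge translates into the machine taking roughly three points out of every four, which is what "consistently beat" amounts to in practice.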
--
It seems that the game of chess was not as hard for computers to master as originally thought, and it has been mastered in large part through brute-force methods. So the prowess of chess-playing machines is not an accurate measure of the progress of AI.
What is needed is a game enormously more complex than chess, one that could never be mastered through mere brute-force methods no matter how fast computers become. There is only one game that fits the bill: the oriental game of ‘Go’, the most complex game in the world. Wiki article on ‘Go’:
http://en.wikipedia.org/wiki/Go_(board_game)
A typical game of ‘Go’ lasts about 200 moves per player; by comparison, a typical chess game lasts only about 35 moves per player. A typical ‘Go’ position offers an average of about 200 legal moves per ply, versus only 35 or so for chess. Whereas there are estimated to be ‘only’ 10^50 possible legal chess positions, there are estimated to be a stupendously staggering 10^170 legal ‘Go’ positions. This means that no brute-force method can ever master ‘Go’. The best current AI techniques and hardware applied to ‘Go’ cannot yet beat even an *average* ‘Go’ player! See this excellent newspaper article:
http://technology.guardian.co.uk/online/insideit/story/0,,1835570,00.html
“Won't brute force do the job on Go, as it did in other games? No, says Bob Myers, who runs the Intelligent Go website. "A very rough estimate might be that the evaluation function [for computer Go] is, at best, 100 times slower than chess, and the branching factor is four times greater at each play; taken together, the performance requirements for a chess-like approach to Go can be estimated as 10^27 times greater than that for computer chess."”
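For what it's worth, the quoted 10^27 figure and the position counts above can be sanity-checked with back-of-the-envelope arithmetic. A rough Python sketch, using the branching factors and game lengths cited above (the ~42-ply search depth is my own assumption, chosen only to show where Myers' estimate lands):

    import math

    # Game-tree size estimate: (legal moves per ply) ** (plies per game).
    # This counts move sequences, so it dwarfs the position counts above.
    def log10_tree_size(branching, plies):
        return plies * math.log10(branching)

    print(log10_tree_size(35, 70))    # chess: ~10^108 possible games
    print(log10_tree_size(200, 400))  # Go: ~10^920 possible games

    # Myers' estimate: evaluation ~100x slower and branching ~4x wider per
    # ply, so a chess-like search to depth d costs about 100 * 4**d more.
    d = 42  # assumed search depth in plies
    print(math.log10(100 * 4 ** d))   # ~27, i.e. ~10^27 times chess's cost

The exact exponent depends on the assumed depth, but the point stands: the gap is so many orders of magnitude wide that raw speed alone cannot close it.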
Even Moore’s Law will make little or no inroads into the enormous complexity of ‘Go’.
The best ‘Go’ AI software in the world currently can't even beat an *average* player:
"If you can't beat your computer Go opponent, you are going to be one of the rabbits at the local Go club.”
See the AAAI links on ‘Go’. Whereas all other board games have been mastered by brute force, ‘Go’ is the one game that could never be mastered by brute force, and it might be the true ‘Turing test’ for real AI:
http://www.aaai.org/aitopics/html/go.html
Also a very interesting wiki devoted to ‘Go’:
http://senseis.xmp.net/
‘Go’ is apparently hugely popular in Japan, Korea and China, but not much played in the West.
The interesting point is that the rules of ‘Go’ are amazingly simple but produce stupendous complexity. It says at the above site that even ‘simple’ endgame positions have been shown to be PSPACE-hard:
“In fact, Go endgames have been proven to be PSpace-hard, let alone other parts of the game. Also, many other aspects of Go, including life and death, are also known to be NP-hard.”
This goes to show that speed is definitely *not* everything and all the hype about ‘Moore’s Law’ is misguided. You could increase your hardware speed by millions upon millions upon millions of times and *still* it would not help one whit in ‘Go’. And remember…the rules for ‘Go’ are amazingly simple.
I was going to mention Hydra, too. I had just put up a blog post suggesting that computer competence against human players had been stagnating over the last 10 years, but it looks like computers can now solidly defeat even the best human player (although, it should be pointed out, Kasparov is/was better than Kramnik, and is undoubtedly the best player who ever lived).
Although machines have finally and clearly overtaken humans, I would say that their rate of increase in chess skill has not kept pace with the exponential growth of computing power (consider how much more computational power Deep Fritz had than Deep Blue, yet Deep Fritz is not that much better). That leads me to believe that we may be approaching an asymptote in chess skill.
“Won't brute force do the job on Go, as it did in other games? No...”
Of course. When the board is empty, there are an incredible number of possible board positions. Humans follow a small set of self-devised rules (place the first piece centrally, place another right next to the first one, or off to the side etc.) in order to get past that computational roadblock, and any machine Go player must do the same.
taken together, the performance requirements for a chess-like approach to Go can be estimated as 10^27 times greater than that for computer chess...
Even Moore’s Law will make little or no inroads into the enormous complexity of ‘Go’.
2^100 = ~10^30. So 100 iterations of Moore's Law would get you about 1,000 times that kind of computational power, extrapolated from today's technology. If Moore's Law continues at a doubling rate of 12-18 months, then we won't have that kind of computational power for over a century. But I think there will be a break in Moore's Law at some point.
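A quick sanity check on that timescale, as a minimal Python sketch (the 10^27 target is Myers' estimate quoted above; the 12-18 month doubling period is this comment's own assumption):

    import math

    target = 1e27                  # Myers' estimated Go-vs-chess cost ratio
    doublings = math.log2(target)  # ~89.7, so about 90 doublings needed
    print(doublings)
    # ~90 to ~135 years at 12-18 months per doubling:
    print(doublings * 1.0, doublings * 1.5)

So "over a century" is about right at the slower doubling rate, and just under one at the faster.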
And remember…the rules for ‘Go’ are amazingly simple.
That's precisely the problem. Rules constrain the state/search space. With each chess piece restricted to its specific legal moves, there are on the order of 10^120 possible chess games. But if you allowed each piece to make ANY move, imagine the possibilities.
In terms of building useful AGI, that's not a problem. The real world is full of regularities that constrain the state space within which an AGI must operate.
Even crappy chess and Go players can find their way around the real world (thanks to a repertoire of narrow-domain modules carved out in the EEA, which are produced with less computing power than is necessary to play competent Go via search trees).
Martin,
The fact that the best 'Go' machines, using the same brute-force methods and hardware that worked for chess, currently can't even beat an average human 'Go' player says something about just how far we still have to go as regards real AI.
Within the plausible time-frame for creating real AI (i.e. the next 50 years), no brute-force method can succeed in playing 'Go' at human-champion level.
You can see from the 'Go' example just how over-hyped Moore's Law is.
Since (as you pointed out) the best human 'Go' players use less computational power than the required brute-force computer search trees, yet play 'Go' at a hugely better level than the machines, this shows the limitations of mere brute force. Speed itself is not as important as effectiveness.
We could, right now, on current hardware, exceed or duplicate all the effects of a century's worth of Moore's Law on 'Go' if only we had more intelligent software (since human champion 'Go' players already play at that level).
Since (as you pointed out) the best human 'Go' players use less computational power than the required brute-force computer search trees, yet play 'Go' at a hugely better level than the machines...
Yes, because the search space is so vast that it takes a long time to find a high-fitness move. Humans just follow a few pre-defined rules, especially early in the game.
My point was that more rules are actually better, which makes navigating the real world easier for an AGI than playing Chess or Go, because there are so many exploitable regularities in the real world.
For example, it takes less computational power for a human, or an AGI, to furnish a house than to play chess or Go. There are a large number of rules that constrain one's possibilities, based on the nature of the furniture, efficiency of space, and received cultural rules. Lamps go on top of end tables, chairs go around tables, and couches go against a wall, not out in the middle of the room or catercorner against a corner (unless you're an artist or something). There are fewer (normal) ways to furnish a house than there are board positions in chess.
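To make the constraint point concrete, here is a toy Python sketch (the furniture items, slots, and "cultural" rules are made up purely for illustration): it counts raw arrangements versus arrangements that satisfy just two rules.

    import itertools

    items = ["couch", "lamp", "end_table", "chair"]
    slots = ["wall_1", "wall_2", "corner", "center", "by_table"]

    total = 0
    allowed = 0
    # Assign each item to a distinct slot and count the arrangements.
    for placement in itertools.permutations(slots, len(items)):
        total += 1
        assignment = dict(zip(items, placement))
        # Toy rules: couches go against a wall; lamps don't sit in the center.
        if assignment["couch"].startswith("wall") and assignment["lamp"] != "center":
            allowed += 1

    print(total, allowed)  # 120 raw arrangements, only 36 once the rules apply

Two toy rules already prune the space to less than a third of its raw size; pile on real-world physics and convention, and the search becomes trivial next to Go.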
Humans can navigate their physical and cultural world way better than they can play chess or Go. The inability of an AI to play Go says nothing about its ability to achieve goals in the real world -- at least not if it wants to do humanlike things.
The solution, of course, as you suggested, is to build AGIs that know something about the world, i.e., have rules by which to operate, rather than confronting every real-world problem through search trees.
Martin, by the way, I didn't get a chance to compliment you on your gamespace piece. There was a lot of food for thought there, and I particularly liked your idea of safespace and errorspace.
For those who might have missed it, check out Martin's article here:
http://striz.org/blog/?p=307
Cheers,
George