December 5, 2006

Deep Fritz crushes Kramnik in Game 6, wins tournament

Vladimir Kramnik, the reigning world champion, was soundly defeated today by Deep Fritz v.10 in the final game of the RAG tournament. Kramnik, who had been getting increasingly boxed in, tendered his resignation on the 47th move. The win gave Fritz, which finished with two wins and four draws against its human opponent, the championship title. Kramnik never came close to defeating Fritz in any of the games. By virtue of this victory, we have to assume that Fritz 10 is the best chess-playing entity on the planet.

Kramnik was in the unenviable position of having to play as Black in a must-win game. He opened very competently with the Sicilian Defense and looked very good until about the 20th move. In fact, most of the commentating experts were ecstatic about Kramnik's early development and Fritz's back-tracking.

But as has been shown time and time again, it's one thing to sense that you have a good position against the machine, and quite another to actually convert it into a win. Over the next several moves, Fritz recovered and maneuvered itself such that Kramnik was absolutely hogtied. It didn't help that Kramnik made some questionable moves; he had far too many pieces that were inert and out of the action. Kramnik thought he might have achieved a small victory by exchanging queens in the middlegame (a recurring strategy of his, aimed at simplifying the board), but it proved to be of no real gain.

I'm going to mull over these results and put together an article about the current state of chess and what the future might hold for man versus machine interactions.


Anonymous said...


‘Deep Fritz’ is not even the world’s strongest chess program – Hydra probably is:

That’s because the best chess machines were excluded from consideration to play Kramnik, and Kramnik was even granted conditions highly favourable to him for the contest. It’s clear that the game of chess has now been mastered by computers, and the top machines could now consistently beat the world’s best. The top machines running on super-computers (like Hydra) are estimated to have a rating of around 3000 Elo. (A human world champion, by comparison, has an Elo rating of around 2800, which is probably the limit of the human brain.)
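For context on what a 200-point gap means in practice, the standard Elo formula gives one player's expected score against another. A minimal sketch, plugging in the estimated ratings above (these are the commenter's estimates, not measured values):

```python
# Standard Elo expected-score formula: the score (between 0 and 1)
# that a player rated r_a is expected to achieve against one rated r_b.
def expected_score(r_a: float, r_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

# A ~3000-rated machine vs. a ~2800-rated human champion:
machine_vs_human = expected_score(3000, 2800)
print(f"{machine_vs_human:.2f}")  # prints 0.76
```

In other words, a 200-point gap corresponds to scoring roughly three points out of every four, which is consistent with the claim that the top machines could now consistently beat the world's best.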


It seems that the game of chess was not as hard for computers to master as originally thought, and it has been mastered in large part through brute-force methods. So the prowess of chess-playing machines is not an accurate measure of the progress of AI.

What is needed is a game enormously more complex than chess, one that could never be mastered through mere brute-force methods no matter how fast computers become. There is only one game that fits the bill: the oriental game of ‘Go’, the most complex game in the world. Wiki article on ‘Go’:

A typical game of ‘Go’ lasts for 200 moves per player; by comparison, a typical chess game lasts only 35 moves per player. A typical ‘Go’ position has an average of 200 legal moves; a typical ‘Chess’ position only 35 or so. Whereas there are estimated to be ‘only’ 10^50 possible legal chess positions, there are estimated to be a stupendously staggering 10^170 legal ‘Go’ positions. This means that no brute-force method can ever master ‘Go’. The best current AI techniques and hardware applied to ‘Go’ cannot yet beat even an *average* ‘Go’ player! See excellent newspaper article:,,1835570,00.html
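As a sanity check on those orders of magnitude, a crude game-tree estimate is branching^plies. The sketch below uses only the averages quoted above; these are back-of-the-envelope figures, not precise counts:

```python
import math

def tree_size_log10(branching: int, plies: int) -> float:
    """Order of magnitude (log10) of branching ** plies."""
    return plies * math.log10(branching)

# Chess: ~35 legal moves per position, ~35 moves per player = ~70 plies.
chess = tree_size_log10(35, 70)
# Go: ~200 legal moves per position, ~200 moves per player = ~400 plies.
go = tree_size_log10(200, 400)

print(f"chess game tree ~ 10^{chess:.0f}")  # prints chess game tree ~ 10^108
print(f"go game tree    ~ 10^{go:.0f}")     # prints go game tree    ~ 10^920
```

Note these are game-tree sizes, which dwarf the position counts (10^50 and 10^170) quoted above; either way, the Go numbers are astronomically beyond the chess numbers.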

“Won't brute force do the job on Go, as it did in other games? No, says Bob Myers, who runs the Intelligent Go website. ‘A very rough estimate might be that the evaluation function [for computer Go] is, at best, 100 times slower than chess, and the branching factor is four times greater at each play; taken together, the performance requirements for a chess-like approach to Go can be estimated as 10^27 times greater than that for computer chess.’”

Even Moore’s law will make little or no inroads into the enormous complexity of ‘Go’.
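To put the 10^27 figure from the quote above into Moore's-law terms, here is a quick calculation assuming the common rule-of-thumb reading of the law (a doubling of performance every 18 months):

```python
import math

SPEEDUP_NEEDED = 1e27        # factor quoted above for a chess-like approach to Go
MONTHS_PER_DOUBLING = 18     # rule-of-thumb reading of Moore's law

doublings = math.log2(SPEEDUP_NEEDED)          # ~89.7 doublings needed
years = doublings * MONTHS_PER_DOUBLING / 12   # ~135 years of Moore's law
print(f"{doublings:.0f} doublings, roughly {years:.0f} years")
```

So even taking that 10^27 estimate at face value, a chess-like approach to Go would need well over a century of uninterrupted Moore's-law scaling just to match today's computer chess.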

The best ‘Go’ AI software in the world can’t even currently beat an *average* player:

"If you can't beat your computer Go opponent, you are going to be one of the rabbits at the local Go club.”

See AAAI links on ‘Go’. Whereas all other board games have been mastered by brute force, ‘Go’ is the one game that could never be mastered by brute force, and it might be the true ‘Turing test’ for real AI:

Also a very interesting wiki devoted to ‘Go’:

‘Go’ is apparently hugely popular in Japan, Korea and China, but not much played in the West.

The interesting point is that the rules of ‘Go’ are amazingly simple but produce stupendous complexity. It says at the above site that even ‘simple’ endgame positions have been shown to be PSPACE-hard:

“In fact, Go endgames have been proven to be PSpace-hard, let alone other parts of the game. Also, many other aspects of Go, including life and death, are also known to be NP-hard.”

This does go to show that speed is definitely *not* everything and that all the hype about ‘Moore’s Law’ is misguided. You could increase your hardware speed by millions upon millions of times and *still* it would not help one whit in ‘Go’. And remember…the rules of ‘Go’ are amazingly simple.

Anonymous said...


The fact that the best 'Go' machines, using the same brute-force methods and hardware that worked for Chess, currently can't even beat an average human 'Go' player says something about just how far we still have to go as regards real AI.

Within the plausible time-frame for creating real AI (i.e. the next 50 years), no brute-force method can succeed in playing 'Go' at human champion levels.

You can see from the 'Go' example just how over-hyped Moore's Law is.

Since (as you pointed out) the best human 'Go' players use less computational power than the required brute-force search trees and still play 'Go' at a hugely better level than the machines, this shows the limitations of mere brute force. Speed itself is not as important as effectiveness.

We could right now (on current hardware) exceed or duplicate all the effects of a century's worth of Moore's Law on 'Go' if only we had software that was more intelligent (since human champion 'Go' players can play at that level now).

George said...

Martin, by the way, I didn't get a chance to compliment you on your gamespace piece. There was a lot of food for thought there, and I particularly liked your idea of safespace and errorspace.

For those who might have missed it, check out Martin's article here: