A computer taught itself to play chess at master level in just three days

Deep Blue, IBM’s famous chess-playing computer, was developed by computer scientists and AI experts over the course of years. But now a new machine, called “Giraffe,” has reportedly mastered chess in just three days, and it plays well enough to beat 98 percent of tournament chess players.

Giraffe, which is described in a new paper uploaded to the preprint server arXiv, is the brainchild of researchers at Imperial College London. They set out to create a system that plays chess in a more human-like way. To date, chess-playing programs have taken a so-called brute-force approach, in which they scan millions of possible moves until they find the best option for a given position. It’s an effective process, but a time-consuming one.
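For the technically inclined, here is a minimal sketch of what brute-force search looks like in code: fixed-depth minimax, demonstrated on a toy take-away game rather than chess. The toy game and its scoring are illustrative stand-ins, not anything from Giraffe or Deep Blue.

```python
# A toy illustration of brute-force game-tree search (minimax), using a
# simple take-away game instead of chess. Nothing here comes from Giraffe
# or Deep Blue; it only shows the shape of the exhaustive approach.

def legal_moves(pile):
    """A player may remove 1, 2, or 3 stones; taking the last stone wins."""
    return [n for n in (1, 2, 3) if n <= pile]

def minimax(pile, maximizing):
    """Exhaustively search every line of play -- the brute-force part."""
    if pile == 0:
        # The previous player took the last stone, so the side to move lost.
        return -1 if maximizing else 1
    scores = [minimax(pile - take, not maximizing) for take in legal_moves(pile)]
    return max(scores) if maximizing else min(scores)

def best_move(pile):
    """Pick the move whose entire subtree scores best for the side to move."""
    return max(legal_moves(pile), key=lambda take: minimax(pile - take, False))

print(best_move(10))  # prints 2: leaving the opponent a multiple of 4 wins
```

A real engine searches the same way in principle but stops after a fixed depth and scores the cut-off positions with an evaluation function; examining millions of positions per move is what makes the approach so expensive.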

Giraffe takes a bit of a shortcut. It learns which moves it can safely eliminate from its decision-making process and analyzes only the remaining options, saving valuable time and computing power.
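As a rough, hedged sketch of the idea, suppose a trained model rates each legal move’s plausibility, and moves below a cutoff are simply never searched. The `move_probability` model and the threshold here are hypothetical stand-ins, not Giraffe’s actual code.

```python
# A hedged sketch of learned move pruning. `move_probability` is a
# hypothetical stand-in for Giraffe's trained network; the cutoff value
# is likewise assumed, not taken from the paper.

PRUNE_THRESHOLD = 0.05  # moves rated below this are never searched

def promising_moves(position, legal_moves, move_probability):
    """Keep only the moves the learned model thinks deserve a deep look."""
    scored = [(move, move_probability(position, move)) for move in legal_moves]
    kept = [move for move, p in scored if p >= PRUNE_THRESHOLD]
    # Never prune everything: fall back to the single best-rated move.
    return kept or [max(scored, key=lambda mp: mp[1])[0]]

# Demo with a toy model that favors captures (moves written with an 'x'):
toy_model = lambda pos, move: 0.9 if "x" in move else 0.02
print(promising_moves("<position>", ["Qxf7", "a3", "h4"], toy_model))  # ['Qxf7']
```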

“Unlike most chess engines in existence today, Giraffe derives its playing strength…from being able to evaluate tricky positions accurately, and understanding complicated positional concepts that are intuitive to humans, but have been elusive to chess engines for a long time,” the researchers wrote.

Giraffe learned to play chess by analyzing 175 million positions. Once the system was trained, its creators used the Strategic Test Suite, a set of 1,500 test positions meant to probe a machine’s “understanding” of strategy, to assess its chess-playing chops. The highest possible score is 15,000. On its first try, Giraffe earned a 6,000. Its best score was 9,700. The researchers pitted Giraffe against eight other machines, and only one scored higher. Overall, the researchers wrote, Giraffe plays at the level of a FIDE International Master, placing it in the top 2.2 percent of tournament chess players.

The researchers used a combination of deep neural networks and reinforcement learning, two machine-learning techniques recently used by Google DeepMind to train machines to play Atari games. As the Imperial College London researchers note in the paper, the big advantage of reinforcement learning is its generality. It has also been used to build machines that teach themselves to play backgammon and poker, and researchers are looking to apply it more broadly in robotics.
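In its simplest form, the reinforcement-learning signal works like this: after each move, the value estimate for the current position is nudged toward the reward plus the estimate for the next position, so a win or loss gradually propagates back through the game. This is a minimal temporal-difference sketch with a lookup table; Giraffe’s paper uses a more sophisticated variant (TD-Leaf) with a neural network in place of the table.

```python
# A minimal temporal-difference (TD) learning sketch. Here `value` is a
# plain lookup table from positions to scores; Giraffe replaces the table
# with a deep neural network and uses the TD-Leaf variant described in
# the paper. ALPHA is an assumed learning rate.

ALPHA = 0.1

def td_update(value, state, next_state, reward):
    """Nudge today's estimate toward the reward plus tomorrow's estimate."""
    target = reward + value.get(next_state, 0.0)
    value[state] = value.get(state, 0.0) + ALPHA * (target - value.get(state, 0.0))

# After a self-play game ends, the final reward (win = 1, loss = -1)
# flows backward one step per update:
value = {}
td_update(value, "penultimate_position", "won_position", reward=1.0)
print(value)  # {'penultimate_position': 0.1}
```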

Deep neural networks, software systems whose architecture is loosely modeled on the brain, are really good at recognizing patterns, exactly the type of capability you want built into an intelligent machine.
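Concretely, a network like this is just layers of weighted sums and simple nonlinearities: a list of numbers describing a position goes in, and a single evaluation score comes out. The layer sizes and random weights below are purely illustrative, not Giraffe’s actual architecture.

```python
# A toy feedforward network: weighted sums plus simple nonlinearities.
# The layer sizes and random weights are purely illustrative; Giraffe's
# actual inputs and architecture are described in the paper.

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 4))  # first layer: 4 input features -> 8 units
W2 = rng.standard_normal((1, 8))  # second layer: 8 units -> 1 score

def evaluate(features):
    """Map a 4-number description of a position to a single score."""
    hidden = np.maximum(0.0, W1 @ features)  # ReLU: keep detected patterns
    return (W2 @ hidden).item()              # combine them into one number

print(evaluate(np.array([1.0, 0.0, -0.5, 2.0])))
```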

“It is clear that Giraffe’s evaluation function now has at least comparable positional understanding compared to evaluation functions of top engines in the world, which is remarkable because their evaluation functions are all carefully hand-designed behemoths with hundreds of parameters that have been tuned both manually and automatically over several years, and many of them have been worked on by human grandmasters,” the researchers wrote.

As with other AI research, the ultimate goal of Giraffe isn’t just to build computers that can beat humans at games, but also to understand how computers learn. There’s a whole line of research now that uses video games as proxies for real-world problems, like self-driving cars. The software that powers a robocar, for instance, could be trained to navigate roads and avoid pedestrians in a video-game version of a city before it was unleashed on real streets. Teaching a computer to teach itself to play chess could lead to other kinds of software solving problems on their own.

Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.
