No, we’re not talking about Global Thermonuclear War, or even that version of Stratego where you only get 5 bombs because your sister lost one of the pieces. Nope. Turns out that the evil overlords at Google… excuse me, Alphabet… have been training a new supercomputer to master the game of Go. It also turns out that AlphaGo (the computer’s name) is damned good at it.
It’s been coming for some time. IBM’s Deep Blue supercomputer beat world chess champion Garry Kasparov in 1997, and IBM’s Watson beat all comers, including former champions, at Jeopardy in 2011. But Go has been considered too difficult a challenge for computers to solve. There are so many permutations in the game that it would be impossible to calculate a brute-force solution in any reasonable time. “There’s more configurations of the board than there are atoms in the universe,” Demis Hassabis, the CEO of Google DeepMind, said in a video released by Nature. Instead, any computer wanting to win at Go would have to think like a human.
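You can sanity-check Hassabis’s claim with a bit of back-of-the-envelope arithmetic. A 19×19 Go board has 361 points, each of which can be empty, black, or white, giving an upper bound of 3^361 arrangements (most aren’t legal positions, but even the legal count dwarfs the commonly cited estimate of roughly 10^80 atoms in the observable universe):

```python
# Upper bound on Go board arrangements: 361 points, 3 states each.
arrangements = 3 ** 361

# Commonly cited rough estimate of atoms in the observable universe.
atoms_in_universe = 10 ** 80

print(len(str(arrangements)))            # 173 digits (about 1.7 x 10^172)
print(arrangements > atoms_in_universe)  # True
```

That gap — 92 orders of magnitude, give or take — is why brute force is a dead end for Go.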
According to Quartz, players often choose moves because they “felt right,” which is not how a computer program acts. DeepMind’s solution was to build two neural networks: computer systems that are modeled after the human brain and can be trained on large data sets to perform certain tasks based on the knowledge they’ve accrued. One of the networks, called a “value network,” evaluates the computer’s positions on the board, and the other, a “policy network,” decides where to move. Instead of evaluating every possible move, AlphaGo selects a few moves that it senses are likely to be good.
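If you strip away the deep learning, the division of labor between the two networks is simple enough to sketch in a few lines. This toy version (not DeepMind’s code; both networks are hard-coded stand-ins, and the moves and scores are made up for illustration) shows the shape of the idea: the policy narrows the search to a shortlist, and the value picks the best of that shortlist:

```python
# Toy sketch of AlphaGo's two-network idea. In the real system both
# functions are deep neural networks combined with Monte Carlo tree
# search; here they are hard-coded stand-ins.

def policy_network(board):
    """Return a few candidate moves with prior probabilities (stand-in)."""
    # A trained policy outputs a probability for every legal move;
    # here we hard-code a three-move shortlist.
    return [("D4", 0.5), ("Q16", 0.3), ("C3", 0.2)]

def value_network(board, move):
    """Estimate the chance of winning after a move (stand-in)."""
    scores = {"D4": 0.62, "Q16": 0.58, "C3": 0.41}
    return scores[move]

def choose_move(board):
    # Instead of brute-forcing all ~361 legal moves, evaluate only the
    # policy network's shortlist and take the highest-valued one.
    candidates = policy_network(board)
    best_move, _prior = max(candidates, key=lambda mv: value_network(board, mv[0]))
    return best_move

print(choose_move(board=None))  # D4
```

The payoff is in the search budget: evaluating three candidates instead of hundreds per turn is exactly why, as the researchers note below, AlphaGo examined thousands of times fewer positions than Deep Blue did.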
In October, DeepMind’s AlphaGo took on the reigning European Go champion, Fan Hui, and smoked him, 5-0. Using its new two-brain calculating method, DeepMind’s researchers said in their paper that AlphaGo “evaluated thousands of times fewer positions than Deep Blue did in its chess match against Kasparov.”
At the last minute, the Nature editor covering the event realized he was looking at the beginning of the end of the human race. Senior editor Tanguy Chouard, who served as a moderator for the match, said, “It was one of the most exciting moments in my career, but one couldn’t help but root for the poor human being getting beaten.”
“Humans have weaknesses. They get tired when they play a very long match. They can play mistakes,” Hassabis said, adding to the whole “this is a Terminator who plays board games” vibe. “Humans have a limitation in terms of the actual number of Go games that they’re able to process in a lifetime. AlphaGo can play through millions of games every single day.”
Don’t fear, though. It may not be Google’s supercomputers that take over the world. In December, Rémi Coulom, a prominent AI researcher who’s spent years trying to crack the game (and is even cited in DeepMind’s research paper), told Wired that he believed someone would crack the game in the next ten years. Facebook CEO Mark Zuckerberg posted this morning that his company’s AI researchers are also pretty close to beating the game.
Indeed. One day we may all be rooting for the poor human.