Everything Is F*cked







Chapter 9


The Final Religion


In 1997, Deep Blue, a supercomputer developed by IBM, beat Garry Kasparov, the world’s best chess player. It was a watershed moment in the history of computing, a seismic event that shook many people’s understanding of technology, intelligence, and humanity. But today, it is but a quaint memory: of course a computer would beat the world champion at chess. Why wouldn’t it?

Since the beginning of computing, chess has been a favorite means to test artificial intelligence.1 That’s because chess possesses a near-infinite number of permutations: there are more possible chess games than there are atoms in the observable universe. In any board position, if one looks only three or four moves ahead, there are already hundreds of millions of variations.
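The "hundreds of millions of variations" figure is easy to sanity-check with a back-of-envelope calculation. The commonly cited average of roughly thirty-five legal moves per position is an assumption not stated in the text, but it shows how quickly the tree explodes:

```python
# Rough estimate of the chess game tree a few moves deep.
# Assumption (not from the text): a typical middlegame position
# offers about 35 legal moves to the player on the move.
branching_factor = 35

# "Three or four moves ahead" means each side moves, so three full
# moves is six half-moves (plies).
plies = 6

variations = branching_factor ** plies
print(f"{variations:,}")  # 1,838,265,625 — billions of lines after just three moves each
```

Even at five plies the count is already in the tens of millions, which is why brute-force calculation alone is hopeless without something telling the machine where to look.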

For a computer to match a human player, not only must it be capable of calculating an incredible number of possible outcomes, but it must also have solid algorithms to help it decide what’s worth calculating. Put another way: to beat a human player, a computer’s Thinking Brain, despite being vastly superior to a human’s, must be programmed to judge which board positions are more valuable and which are less—that is, the computer must have a modestly powerful “Feeling Brain” programmed into it.2
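That split between raw calculation and a sense of what a position is worth maps directly onto the classic minimax search used by chess engines. Here is a toy sketch of the idea, not any real engine's code: the recursive search plays the role of the Thinking Brain, while a hand-supplied `evaluate` function (the scores here are invented) plays the role of the programmed Feeling Brain that scores positions it cannot search past.

```python
# Toy minimax: the search (Thinking Brain) explores the tree; the
# evaluate function (the programmed "Feeling Brain") scores positions
# at the horizon. Tree and scores below are made-up illustrations.

def minimax(node, depth, maximizing, evaluate, children):
    """Return the best score the player to move can force."""
    kids = children(node)
    if depth == 0 or not kids:          # horizon or dead end: ask the Feeling Brain
        return evaluate(node)
    scores = [minimax(k, depth - 1, not maximizing, evaluate, children)
              for k in kids]
    return max(scores) if maximizing else min(scores)

# A hypothetical two-ply game tree with invented leaf scores
# (positive = good for the first player).
tree = {"start": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf_scores = {"a1": 3, "a2": -2, "b1": 1, "b2": 0}

best = minimax("start", 2, True,
               evaluate=lambda n: leaf_scores.get(n, 0),
               children=lambda n: tree.get(n, []))
print(best)  # 0 — the opponent steers away from the tempting +3 line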

Since that day in 1997, computers have continued to improve at chess at a staggering rate. Over the following fifteen years, the top human players regularly got pummeled by chess software, sometimes by embarrassing margins.3 Today, it’s not even close. Kasparov himself recently joked that the chess app that comes installed on most smartphones “is far more powerful than Deep Blue was.”4 These days, chess software developers hold tournaments for their programs to see whose algorithms come out on top. Humans are not only excluded from these tournaments, but they’d likely not even place high enough for it to matter anyway.

The undisputed champion of the chess software world for the past few years has been an open-source program called Stockfish. Stockfish has either won or been the runner-up in almost every significant chess software tournament since 2014. A collaboration between half a dozen lifelong chess software developers, Stockfish today represents the pinnacle of chess logic. Not only is it a chess engine, but it can analyze any game, any position, giving grandmaster-level feedback within seconds of each move a player makes.

Stockfish was happily going along as the king of the computerized chess mountain, the gold standard of chess analysis worldwide, until 2018, when Google showed up to the party.

Then shit got weird.

Google has a program called AlphaZero. It’s not chess software. It’s artificial intelligence (AI) software. Instead of being programmed to play chess or another game, the software is programmed to learn—and not just chess, but any game.

Early in 2018, Stockfish faced off against Google’s AlphaZero. On paper, it was not even close to a fair fight. AlphaZero can calculate “only” eighty thousand board positions per second. Stockfish? Seventy million. In terms of computational power, that’s like me entering a footrace against a Formula One race car.

But it gets even weirder: the day of the match, AlphaZero didn’t even know how to play chess. Yes, that’s right—before its match with the best chess software in the world, AlphaZero had less than a day to learn chess from scratch. The software spent most of the day running simulations of chess games against itself, learning as it went. It developed strategies and principles the same way a human would: through trial and error.

Imagine the scenario. You’ve just learned the rules of chess, one of the most complex games on the planet. You’re given less than a day to mess around with a board and figure out some strategies. And from there, your first game ever will be against the world champion.

Good luck.

Yet, somehow, AlphaZero won. Okay, it didn’t just win. AlphaZero smashed Stockfish. Out of one hundred games, AlphaZero won or drew every single game.

Read that again: a mere nine hours after learning the rules to chess, AlphaZero played the best chess-playing entity in the world and did not drop a single game out of one hundred. It was a result so unprecedented that people still don’t know what to make of it. Human grandmasters marveled at the creativity and ingenuity of AlphaZero. One, Peter Heine Nielsen, gushed, “I always wondered how it would be if a superior species landed on earth and showed us how they play chess. I feel now I know.”5

When AlphaZero was done with Stockfish, it didn’t take a break. Pfft, please! Breaks are for frail humans. Instead, as soon as it had finished with Stockfish, AlphaZero began teaching itself the strategy game Shogi.

Shogi is often referred to as Japanese chess, but many argue that it’s more complex than chess.6 Whereas Kasparov lost to a computer in 1997, top Shogi players didn’t begin to lose to computers until 2013. Either way, AlphaZero destroyed the top Shogi software (called “Elmo”), and by a similarly astounding margin: in one hundred games, it won ninety, lost eight, and drew two. Once again, AlphaZero’s computational powers were far less than Elmo’s. (In this case, it could calculate forty thousand moves per second compared to Elmo’s thirty-five million.) And once again, AlphaZero hadn’t even known how to play the game the previous day.

In the morning, it taught itself two infinitely complex games. And by sundown, it had dismantled the best-known competition on earth.

News flash: AI is coming. And while chess and Shogi are one thing, as soon as we take AI out of the board games and start putting it in the boardrooms . . . well, you and I and everyone else will probably find ourselves out of a job.7

Mark Manson's Books