First, an artificial intelligence created by Google’s DeepMind mastered the game Go; now, it’s beating professionals at StarCraft, a popular sci-fi strategy game.
An AI program named AlphaStar recently won all 10 matches it played against two professional players of the video game StarCraft II, DeepMind said Thursday.
The feat indicates computers are getting better at solving complicated problems, something DeepMind, a London-based artificial intelligence company that Google (GOOG) purchased in 2014, has focused on for years.
The company previously gained recognition for building an AI called AlphaGo, which beat professional players of the 2,500-year-old game Go. That was a feat computer scientists had long struggled to achieve with AI, since Go, which involves players alternating at placing black and white stones on a 19-by-19 grid, can be played with a near-infinite number of move sequences.
StarCraft II is also much trickier for computers to master than many other games because of its complexity and dependence on strategy. In the sci-fi game, players can take on the roles of three different galactic groups (Terran, Zerg, or Protoss) and fight to control the galaxy.
In a blog post on Thursday, DeepMind outlined challenges that its AI faced learning to play StarCraft II, such as the inability of players to see everything that’s happening at once and the use of continuous gameplay (rather than players taking turns).
In December, AlphaStar played as a Protoss and won five games against Dario Wünsch, a German player who goes by the gamer handle TLO and who also played as a Protoss (although it is not the group in which he specializes). A week later, the AI won five games again, this time against a tougher Protoss competitor: Grzegorz Komincz, a professional gamer from Poland who goes by the name MaNa. DeepMind announced the victories during a live stream on YouTube and Twitch.
The researchers used a tournament-style approach to train AlphaStar. First, they spent three days training a neural network — a machine-learning algorithm modeled after the way neurons work in a brain — on replays of human players’ StarCraft II games. That network was then used to seed a number of computer-based competitors that played round after round of the game against each other over the course of two weeks, each learning from its wins and losses.
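The two-phase recipe described above, imitation learning on human replays followed by a league of agents training against each other, can be sketched in miniature. Everything below is an illustrative assumption, not DeepMind's actual method: the toy game (rock-paper-scissors standing in for StarCraft II), the function names, and the simple "reinforce the winning move" update rule.

```python
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def imitation_policy(replays):
    """Phase 1 (sketch): estimate a move distribution from 'human' replays,
    the way AlphaStar's first network was trained on human games."""
    counts = {m: 1 for m in MOVES}  # add-one smoothing so no move has zero weight
    for move in replays:
        counts[move] += 1
    total = sum(counts.values())
    return {m: c / total for m, c in counts.items()}

def play(policy):
    """Sample one move from a policy's (possibly unnormalized) weights."""
    return random.choices(MOVES, weights=[policy[m] for m in MOVES])[0]

def league_training(seed_policy, agents=5, rounds=200):
    """Phase 2 (sketch): clone the imitation policy into a league of agents
    that repeatedly play each other; a winner slightly reinforces the move
    that won, loosely mimicking learning from self-play experience."""
    league = [dict(seed_policy) for _ in range(agents)]
    for _ in range(rounds):
        a, b = random.sample(range(agents), 2)
        move_a, move_b = play(league[a]), play(league[b])
        if BEATS[move_a] == move_b:
            league[a][move_a] += 0.01  # agent a won: upweight its move
        elif BEATS[move_b] == move_a:
            league[b][move_b] += 0.01  # agent b won: upweight its move
    return league
```

DeepMind's note that it fielded five of the top league agents against the pros corresponds, in this toy, to picking the strongest members of the returned `league` list.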
DeepMind said it used five of the top AI competitors — so, five different versions of AlphaStar — to play each of the games against Komincz and Wünsch. It also used the week between the matches to improve the AI.
David Silver, co-lead researcher at DeepMind, said the team building AlphaStar thought a lot about fairness and wanted the bot to play in a way similar to humans, for example by not taking many more actions per minute in the game than a person would.
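A cap like the one Silver describes could take the form of a sliding-window rate limiter that refuses actions once the agent has used up its per-minute budget. This is only a sketch of the idea: the class name, the 280-APM default, and the window mechanics are assumptions for illustration, not DeepMind's implementation.

```python
import collections

class APMLimiter:
    """Reject actions that would push an agent past a per-minute budget,
    so it cannot simply out-click a human opponent."""

    def __init__(self, max_apm=280):  # 280 is an illustrative figure
        self.max_apm = max_apm
        self.timestamps = collections.deque()  # times of recent actions

    def allow(self, now_seconds):
        """Return True if an action at time now_seconds fits the budget."""
        # Drop actions that fell out of the trailing 60-second window.
        while self.timestamps and now_seconds - self.timestamps[0] >= 60:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_apm:
            self.timestamps.append(now_seconds)
            return True
        return False  # over budget: the agent must idle this frame
```

In a game loop, the agent would call `allow()` before each action and issue a no-op whenever it returns False.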
Still, the AI trounced the humans repeatedly. “Every single game I was, again, in the dark,” said Wünsch on the live stream, adding that he had to keep figuring out a new strategy. Komincz, at least, earned some redemption on Thursday: he bested an even newer version of AlphaStar during a live-streamed exhibition match of the game.
Though AlphaStar has so far been trained to play StarCraft II only as a Protoss, DeepMind research scientist Oriol Vinyals said the company plans to train it to play the game’s other groups as well.