ExIt-OOS: Towards Learning from Planning in Imperfect Information Games

2018-08-30

Andy Kitchen, Michela Benedetti


Abstract

The current state of the art in many important perfect information games, including Chess and Go, combines planning and deep reinforcement learning with self-play. We extend this approach to imperfect information games and present ExIt-OOS, a novel approach to playing imperfect information games within the Expert Iteration framework, inspired by AlphaZero. We use Online Outcome Sampling (OOS), an online search algorithm for imperfect information games, in place of MCTS. During online training, our neural strategy is used to improve the accuracy of playouts in OOS, creating a learning and planning feedback loop for imperfect information games.
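The feedback loop the abstract describes can be sketched in miniature: a search "expert" improves on the current "apprentice" strategy by running playouts guided by that strategy, and the apprentice is then trained toward the expert's output. The sketch below is purely illustrative, assuming a toy one-shot game with noisy payoffs; the averaging "expert" is a crude stand-in for Online Outcome Sampling, and the mixing update stands in for supervised training of the neural strategy.

```python
import random

random.seed(0)

ACTIONS = [0, 1, 2]
TRUE_VALUES = [0.2, 0.8, 0.5]  # hidden expected payoffs (toy stand-in for game outcomes)

def playout(action):
    """Noisy sampled outcome, standing in for an OOS playout to a terminal state."""
    return TRUE_VALUES[action] + random.uniform(-0.1, 0.1)

def expert_search(strategy, n_sims=200):
    """Toy 'expert': estimate action values from sampled playouts.

    In ExIt-OOS the expert is Online Outcome Sampling guided by the
    apprentice strategy; here we just average noisy playouts, allocating
    simulations in proportion to the apprentice's action probabilities.
    """
    totals = [0.0] * len(ACTIONS)
    counts = [0] * len(ACTIONS)
    for _ in range(n_sims):
        a = random.choices(ACTIONS, weights=strategy)[0]
        totals[a] += playout(a)
        counts[a] += 1
    values = [totals[a] / counts[a] if counts[a] else 0.0 for a in ACTIONS]
    best = max(ACTIONS, key=lambda a: values[a])
    # Expert target: a sharpened distribution favouring the best-looking action.
    return [0.9 if a == best else 0.05 for a in ACTIONS]

def train(iters=30, lr=0.3):
    """Expert Iteration loop: search improves the strategy, strategy guides search."""
    strategy = [1 / 3] * 3  # start from a uniform apprentice
    for _ in range(iters):
        target = expert_search(strategy)
        # Apprentice update: move toward the expert target (stand-in for
        # supervised training of the neural strategy on search results).
        strategy = [(1 - lr) * s + lr * t for s, t in zip(strategy, target)]
        total = sum(strategy)
        strategy = [s / total for s in strategy]
    return strategy

final = train()
```

After a few iterations the apprentice concentrates on the highest-payoff action, mirroring how, in the paper's setting, the neural strategy and the search improve each other over successive iterations.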
