
Self-Play PSRO: Toward Optimal Populations in Two-Player Zero-Sum Games

2022-07-13

Stephen McAleer, JB Lanier, Kevin Wang, Pierre Baldi, Roy Fox, Tuomas Sandholm

Abstract

In competitive two-agent environments, deep reinforcement learning (RL) methods based on the Double Oracle (DO) algorithm, such as Policy Space Response Oracles (PSRO) and Anytime PSRO (APSRO), iteratively add RL best response policies to a population. Eventually, an optimal mixture of these population policies will approximate a Nash equilibrium. However, these methods might need to add all deterministic policies before converging. In this work, we introduce Self-Play PSRO (SP-PSRO), a method that adds an approximately optimal stochastic policy to the population in each iteration. Instead of adding only deterministic best responses to the opponent's least exploitable population mixture, SP-PSRO additionally learns an approximately optimal stochastic policy and adds it to the population. As a result, SP-PSRO empirically tends to converge much faster than APSRO, and in many games it converges in just a few iterations.
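The population loop the abstract describes can be illustrated on a small matrix game. The sketch below is a schematic Double Oracle/PSRO iteration, not the authors' implementation: an exact `argmax` best response stands in for a trained RL oracle, fictitious play stands in for the restricted-game (meta) Nash solver, and SP-PSRO's extra step of also adding a learned stochastic policy each iteration is omitted. The game choice (rock-paper-scissors) and all function names are illustrative assumptions.

```python
import numpy as np

# Illustrative symmetric zero-sum game: rock-paper-scissors payoffs for the
# row player. (A stand-in for the RL environments in the paper.)
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

def fictitious_play(sub, iters=5000):
    """Approximate the Nash mixtures of the restricted (meta) game."""
    rcount = np.ones(sub.shape[0])
    ccount = np.ones(sub.shape[1])
    for _ in range(iters):
        rcount[np.argmax(sub @ (ccount / ccount.sum()))] += 1
        ccount[np.argmin((rcount / rcount.sum()) @ sub)] += 1
    return rcount / rcount.sum(), ccount / ccount.sum()

def lift(population, weights, n):
    """Turn a mixture over population members into a full-game strategy."""
    full = np.zeros(n)
    for a, w in zip(population, weights):
        full[a] += w
    return full

population = [0]            # start from one arbitrary pure policy
for _ in range(4):          # PSRO iterations
    sub = A[np.ix_(population, population)]
    _, col_mix = fictitious_play(sub)
    # Opponent's least-exploitable population mixture (symmetric game, so
    # one shared population suffices for both players).
    opp = lift(population, col_mix, A.shape[1])
    br = int(np.argmax(A @ opp))    # oracle: exact best response
    if br not in population:
        population.append(br)       # grow the population

# Exploitability of the final meta-Nash mixture (0 at an exact equilibrium).
final = lift(population,
             fictitious_play(A[np.ix_(population, population)])[1],
             A.shape[1])
exploitability = float(np.max(A @ final))
```

On this game the loop recovers all three pure strategies and the meta-Nash mixture approaches uniform, so the exploitability shrinks toward zero; the abstract's point is that DO-style methods may need to add *every* deterministic policy in this way, whereas SP-PSRO's added stochastic policies can short-circuit that growth.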
