
Independent Policy Gradient Methods for Competitive Reinforcement Learning

2021-01-11 · NeurIPS 2020

Constantinos Daskalakis, Dylan J. Foster, Noah Golowich


Abstract

We obtain global, non-asymptotic convergence guarantees for independent learning algorithms in competitive reinforcement learning settings with two agents (i.e., zero-sum stochastic games). We consider an episodic setting where in each episode, each player independently selects a policy and observes only their own actions and rewards, along with the state. We show that if both players run policy gradient methods in tandem, their policies will converge to a min-max equilibrium of the game, as long as their learning rates follow a two-timescale rule (which is necessary). To the best of our knowledge, this constitutes the first finite-sample convergence result for independent policy gradient methods in competitive RL; prior work has largely focused on centralized, coordinated procedures for equilibrium computation.
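
To make the two-timescale rule concrete, here is a minimal sketch, not the paper's algorithm or experiments: two softmax policies run independent REINFORCE-style updates on matching pennies (a 2x2 zero-sum matrix game), with each player observing only its own action and reward, and one player on a deliberately slower step-size schedule. The payoff matrix, step-size exponents, and horizon are illustrative assumptions.

```python
import numpy as np

# Illustrative 2x2 zero-sum game (matching pennies); entries are the
# max player's payoff, the min player receives the negation.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

rng = np.random.default_rng(0)
theta_x = np.zeros(2)  # max player's softmax logits
theta_y = np.zeros(2)  # min player's softmax logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for t in range(1, 100_001):
    # Two-timescale step sizes: one player decays faster, so it learns
    # on a slower timescale (exponents chosen for illustration only).
    eta_x = 0.5 / t ** 0.6
    eta_y = 0.5 / t ** 0.8

    pi_x, pi_y = softmax(theta_x), softmax(theta_y)
    a = rng.choice(2, p=pi_x)
    b = rng.choice(2, p=pi_y)
    r = A[a, b]  # max player's reward; min player sees only -r

    # Independent REINFORCE updates: each player uses only its own
    # sampled action and observed reward, never the opponent's policy.
    grad_log_x = -pi_x
    grad_log_x[a] += 1.0  # d log pi_x(a) / d theta_x for softmax
    grad_log_y = -pi_y
    grad_log_y[b] += 1.0
    theta_x += eta_x * r * grad_log_x        # ascend own payoff
    theta_y += eta_y * (-r) * grad_log_y     # min player ascends -r

# Both mixed strategies should drift toward the uniform (0.5, 0.5)
# min-max equilibrium of matching pennies.
print("pi_x ~", softmax(theta_x))
print("pi_y ~", softmax(theta_y))
```

With symmetric step sizes, simultaneous gradient play in this game is known to cycle around the equilibrium rather than converge, which is why the asymmetric (two-timescale) schedule is the key ingredient in the paper's guarantee.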
