
Online Reinforcement Learning in Stochastic Games

NeurIPS 2017 · 2017-12-02

Chen-Yu Wei, Yi-Te Hong, Chi-Jen Lu


Abstract

We study online reinforcement learning in average-reward stochastic games (SGs). An SG models a two-player zero-sum game in a Markov environment, where state transitions and one-step payoffs are determined simultaneously by a learner and an adversary. We propose the UCSG algorithm, which achieves sublinear regret compared to the game value when competing with an arbitrary opponent. This improves on previous results in the same setting. The regret bound depends on the diameter, an intrinsic quantity related to the mixing property of SGs. If we let the opponent play an optimistic best response to the learner, UCSG finds an ε-maximin stationary policy with a sample complexity of O(poly(1/ε)), where ε is the gap to the best policy.
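The maximin benchmark in the abstract is the game value: the payoff the row player can guarantee against any opponent, possibly via a mixed strategy. This is not the paper's UCSG algorithm, but as a minimal illustration, the sketch below computes the value of a 2x2 zero-sum stage game (the one-step game played at each state of an SG), using the classic closed form when no pure saddle point exists; the function name is ours, not from the paper.

```python
def game_value_2x2(A):
    """Value of the zero-sum game with 2x2 payoff matrix A (row maximizer).

    First checks for a pure saddle point (an entry that is the minimum of
    its row and the maximum of its column); otherwise both players mix,
    and the value is (ad - bc) / (a + d - b - c).
    """
    for i in range(2):
        for j in range(2):
            v = A[i][j]
            if v == min(A[i]) and v == max(A[0][j], A[1][j]):
                return v  # pure maximin = minimax: saddle-point value
    (a, b), (c, d) = A
    return (a * d - b * c) / (a + d - b - c)


# Matching pennies has no saddle point; its value is 0.
print(game_value_2x2([[1, -1], [-1, 1]]))   # 0.0
# A game with a pure saddle point at entry (0, 1).
print(game_value_2x2([[3, 1], [2, 0]]))     # 1
```

In an SG, the value of the whole game couples these stage-game values across states through the transition dynamics, which is what makes learning it online nontrivial.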
