
O(T^-1) Convergence to (Coarse) Correlated Equilibria in Full-Information General-Sum Markov Games

2024-02-02

Weichao Mao, Haoran Qiu, Chen Wang, Hubertus Franke, Zbigniew Kalbarczyk, Tamer Başar

Abstract

No-regret learning has a long history of being closely connected to game theory. Recent works have devised uncoupled no-regret learning dynamics that, when adopted by all the players in normal-form games, converge to various equilibrium solutions at a near-optimal rate of O(T^-1), a significant improvement over the O(T^-1/2) rate of classic no-regret learners. However, analogous convergence results are scarce in Markov games, a more general setting that lays the foundation for multi-agent reinforcement learning. In this work, we close this gap by showing that the optimistic follow-the-regularized-leader (OFTRL) algorithm, together with appropriate value update procedures, can find O(T^-1)-approximate (coarse) correlated equilibria in full-information general-sum Markov games within T iterations. Numerical results are also included to corroborate our theoretical findings.
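As context for the algorithmic ingredient named in the abstract, here is a minimal sketch of OFTRL with an entropy regularizer (equivalently, optimistic Hedge) run in self-play on a two-player general-sum normal-form game, the building block that the paper combines with value update procedures to handle Markov games. The payoff matrices, step size `eta`, and function names below are illustrative assumptions, not the paper's implementation; the time-average of the joint play is the candidate coarse correlated equilibrium.

```python
# A minimal sketch of OFTRL with an entropy regularizer (optimistic Hedge)
# in a two-player general-sum normal-form game; illustrative only.
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract the max to stabilize the exponentials
    e = np.exp(z)
    return e / e.sum()

def oftrl_selfplay(A, B, T=1000, eta=0.1):
    """Both players run OFTRL. A[i, j] and B[i, j] are the row and column
    players' payoffs. Returns the per-iteration joint play distributions."""
    n, m = A.shape
    x = np.ones(n) / n       # row player's policy, initialized uniform
    y = np.ones(m) / m       # column player's policy
    gx, gy = np.zeros(n), np.zeros(m)   # cumulative utility vectors
    joints = []
    for _ in range(T):
        joints.append(np.outer(x, y))   # joint play; its average is the candidate CCE
        ux, uy = A @ y, B.T @ x          # expected utilities against the opponent
        gx += ux
        gy += uy
        # OFTRL with entropy regularizer: best-respond to the cumulative
        # utilities plus a one-step recency prediction (the optimistic term).
        x = softmax(eta * (gx + ux))
        y = softmax(eta * (gy + uy))
    return joints

# Example usage with hypothetical random payoff matrices:
rng = np.random.default_rng(0)
A, B = rng.uniform(size=(3, 3)), rng.uniform(size=(3, 3))
avg_joint = np.mean(oftrl_selfplay(A, B), axis=0)  # empirical distribution of play
```

The extra prediction term in `gx + ux` is what distinguishes OFTRL from vanilla FTRL and is the source of the accelerated convergence in self-play; the paper's analysis extends this mechanism to the stagewise structure of Markov games.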
