Entropy Regularization for Mean Field Games with Learning

2020-09-30

Xin Guo, Renyuan Xu, Thaleia Zariphopoulou

Abstract

Entropy regularization has been extensively adopted to improve the efficiency, stability, and convergence of reinforcement learning algorithms. This paper analyzes, both quantitatively and qualitatively, the impact of entropy regularization on Mean Field Games (MFGs) with learning over a finite time horizon. Our study provides a theoretical justification that entropy regularization yields time-dependent policies and, furthermore, helps stabilize and accelerate convergence to the game equilibrium. In addition, this study leads to a policy-gradient algorithm for exploration in MFGs. Under this algorithm, agents are able to learn the optimal exploration scheduling, with stable and fast convergence to the game equilibrium.
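To make the core idea concrete, here is a minimal sketch (not the paper's MFG algorithm) of entropy-regularized policy gradient on a toy two-armed bandit: the objective adds a bonus τ·H(π) to the expected reward, so a positive temperature τ keeps the learned policy stochastic (exploratory) while τ = 0 collapses it onto the greedy arm. The bandit rewards, step counts, and learning rate below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-armed bandit (not from the paper): arm 0 pays 1.0, arm 1 pays 0.5.
rewards = np.array([1.0, 0.5])

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def train(temperature, steps=2000, lr=0.1):
    """REINFORCE on the entropy-regularized objective
    J(theta) = E[r] + temperature * H(pi_theta)."""
    theta = np.zeros(2)
    for _ in range(steps):
        pi = softmax(theta)
        a = rng.choice(2, p=pi)
        # Gradient of log pi(a) for a softmax policy: one_hot(a) - pi.
        grad_logp = -pi.copy()
        grad_logp[a] += 1.0
        # Exact gradient of the entropy H = -sum(pi * log pi) w.r.t. theta:
        # dH/dtheta_k = -pi_k * (log pi_k + H).
        H = -np.sum(pi * np.log(pi))
        grad_H = -pi * (np.log(pi) + H)
        theta += lr * (rewards[a] * grad_logp + temperature * grad_H)
    return softmax(theta)

p_greedy = train(temperature=0.0)   # no regularization: concentrates on the best arm
p_explore = train(temperature=0.5)  # entropy bonus keeps the policy stochastic
print(p_greedy, p_explore)
```

With τ = 0.5 the regularized optimum is the softmax of rewards at that temperature, so the policy stays visibly mixed, whereas the unregularized run drives nearly all mass onto arm 0. The paper's point about *scheduling* exploration corresponds to letting the temperature itself be time-dependent and learned, rather than fixed as in this sketch.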
