
Natural Actor-Critic Converges Globally for Hierarchical Linear Quadratic Regulator

2019-12-14

Yuwei Luo, Zhuoran Yang, Zhaoran Wang, Mladen Kolar



Abstract

Multi-agent reinforcement learning has been successfully applied to a number of challenging problems. Despite these empirical successes, theoretical understanding of different algorithms is lacking, primarily due to the curse of dimensionality caused by the exponential growth of the state-action space with the number of agents. We study the fundamental problem of the multi-agent linear quadratic regulator (LQR) in a setting where the agents are partially exchangeable. In this setting, we develop a hierarchical actor-critic algorithm, whose computational complexity is independent of the total number of agents, and prove its global linear convergence to the optimal policy. As LQRs are often used to approximate general dynamic systems, this paper provides an important step towards a better understanding of general hierarchical mean-field multi-agent reinforcement learning.
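To make the LQR setting concrete, the sketch below runs an exact natural-policy-gradient update on a small single-agent LQR, the building block behind the paper's results. This is not the authors' hierarchical actor-critic: the dynamics matrices, cost weights, step size, and iteration counts are all assumptions chosen for illustration, and the update direction E_K = (R + BᵀPB)K − BᵀPA is the standard natural-gradient direction from the policy-gradient LQR literature.

```python
import numpy as np

# Illustrative sketch only (not the paper's hierarchical algorithm).
# Dynamics: x_{t+1} = A x_t + B u_t; cost: sum_t (x_t'Q x_t + u_t'R u_t);
# linear policy u_t = -K x_t; initial state x_0 ~ N(0, I).
# A, B, Q, R and the step size are assumptions picked for this demo.

A = np.array([[0.9, 0.1],
              [0.0, 0.9]])   # open-loop stable, so K = 0 is admissible
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.eye(1)

def cost_and_natgrad(K):
    """Policy cost trace(P_K) and the natural-gradient direction E_K."""
    Acl = A - B @ K
    # Policy evaluation: P_K solves the Lyapunov equation
    # P = Q + K'RK + Acl' P Acl, here by fixed-point iteration
    # (converges because Acl is stable).
    P = Q + K.T @ R @ K
    for _ in range(1000):
        P = Q + K.T @ R @ K + Acl.T @ P @ Acl
    cost = np.trace(P)  # E[x0' P_K x0] with x0 ~ N(0, I)
    # Natural-gradient direction: E_K = (R + B'PB) K - B'PA
    E_K = (R + B.T @ P @ B) @ K - B.T @ P @ A
    return cost, E_K

K = np.zeros((1, 2))
for _ in range(500):
    _, E_K = cost_and_natgrad(K)
    K = K - 0.2 * E_K  # small step keeps the closed loop stable

final_cost, final_grad = cost_and_natgrad(K)
print("final cost:", final_cost)
```

With a sufficiently small step size, each update provably decreases the cost and the iterates converge linearly to the optimal gain, which is the single-agent version of the global linear convergence the abstract claims for the hierarchical setting.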
