Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge?
Christian Schroeder de Witt, Tarun Gupta, Denys Makoviichuk, Viktor Makoviychuk, Philip H. S. Torr, Mingfei Sun, Shimon Whiteson
Code:
- github.com/facebookresearch/benchmarl (PyTorch, ★ 588)
- github.com/cyanrain7/trpo-in-marl (PyTorch, ★ 224)
- github.com/cyanrain7/trust-region-policy-optimisation-in-multi-agent-reinforcement-learning (PyTorch, ★ 224)
- github.com/chauncygu/multi-agent-constrained-policy-optimisation (PyTorch, ★ 222)
- github.com/morning9393/HAPPO-HATRPO (PyTorch, ★ 45)
- github.com/anonymous-iclr22/trust-region-in-multi-agent-reinforcement-learning (PyTorch, ★ 11)
- github.com/16444take/aope-sim (PyTorch, ★ 0)
Abstract
Most recently developed approaches to cooperative multi-agent reinforcement learning in the centralized training with decentralized execution setting involve estimating a centralized, joint value function. In this paper, we demonstrate that, despite its various theoretical shortcomings, Independent PPO (IPPO), a form of independent learning in which each agent simply estimates its local value function, can perform just as well as or better than state-of-the-art joint learning approaches on the popular multi-agent benchmark suite SMAC with little hyperparameter tuning. We also compare IPPO to several of its variants; the results suggest that IPPO's strong performance may be due to its robustness to some forms of environment non-stationarity.
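The core distinction the abstract draws can be made concrete: in IPPO each agent trains its own critic on its local observation and applies the standard PPO clipped surrogate to its own trajectory, with no centralized joint value function. The sketch below is an illustrative, simplified rendering of that idea, not the authors' implementation; the function names, the one-step TD advantage, and the toy inputs are all assumptions for exposition.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate, applied per agent in IPPO.

    ratio: pi_new(a|o) / pi_old(a|o) for one agent's own actions.
    advantage: advantage estimates from that agent's *local* critic.
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Pessimistic min of clipped and unclipped terms, as in standard PPO.
    return np.minimum(unclipped, clipped)

def local_advantages(rewards, values, gamma=0.99):
    """One-step TD advantages from an agent's own value estimates V_i(o_i).

    IPPO conditions each critic only on the agent's local observation,
    unlike joint-learning methods that fit a centralized V(s).
    """
    r = np.asarray(rewards, dtype=float)
    v = np.asarray(values, dtype=float)
    next_v = np.append(v[1:], 0.0)  # bootstrap target; 0 at episode end
    return r + gamma * next_v - v

# Each agent runs the same update independently on its own data:
# for obs_i, acts_i, rews_i in per_agent_batches:
#     adv_i = local_advantages(rews_i, critic_i(obs_i))
#     loss_i = -ppo_clip_objective(ratio_i, adv_i).mean()
```

The point of the sketch is structural: nothing in either function ever sees the joint state or other agents' actions, which is exactly the "independent learning" property the paper evaluates.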