Reducing Overestimation Bias in Multi-Agent Domains Using Double Centralized Critics
Johannes Ackermann, Volker Gabler, Takayuki Osa, Masashi Sugiyama
Code
- github.com/JohannesAck/MATD3implementation (official, TensorFlow)
- github.com/fdcl-gwu/gym-rotor (PyTorch)
- github.com/JohannesAck/tf2multiagentrl (TensorFlow)
Abstract
Many real-world tasks require multiple agents to work together. Multi-agent reinforcement learning (RL) methods have been proposed in recent years to solve such tasks, but current methods often fail to learn policies efficiently. We therefore investigate whether a known weakness of single-agent RL, value function overestimation bias, also arises in the multi-agent setting. Based on our findings, we propose an approach that reduces this bias by using double centralized critics. We evaluate it on six mixed cooperative-competitive tasks, showing a significant advantage over current methods. Finally, we investigate the application of multi-agent methods to high-dimensional robotic tasks and show that our approach can be used to learn decentralized policies in this domain.
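The core mechanism the abstract describes, taking the minimum of two centralized critics when forming the TD target (as in TD3's clipped double Q-learning), can be sketched as follows. This is a minimal illustration, not the authors' implementation; all function and variable names are hypothetical.

```python
import numpy as np

def clipped_double_q_target(reward, gamma, q1_next, q2_next, done):
    """TD target using the minimum of two target critics.

    Taking min(Q1, Q2) counteracts value overestimation bias; with
    double *centralized* critics, each Q network conditions on the
    joint observations and actions of all agents (illustrative sketch).
    """
    return reward + gamma * (1.0 - done) * np.minimum(q1_next, q2_next)

# Hypothetical batch of two transitions (second one is terminal).
rewards = np.array([1.0, 0.5])
dones = np.array([0.0, 1.0])
q1 = np.array([10.0, 8.0])   # target critic 1 estimates at next state
q2 = np.array([9.0, 12.0])   # target critic 2 estimates at next state
targets = clipped_double_q_target(rewards, 0.99, q1, q2, dones)
# First transition uses min(10.0, 9.0) = 9.0; the terminal one is just its reward.
```

In a full actor-critic setup, each agent's policy would then be updated against one of the centralized critics, while both critics regress toward these clipped targets.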
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ParticleEnvs Cooperative Communication | MATD3 | final agent reward | -14 | — | Unverified |