Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms in Cooperative Tasks
Georgios Papoudakis, Filippos Christianos, Lukas Schäfer, Stefano V. Albrecht
Code Available
- github.com/uoe-agents/epymarl — official, in paper; PyTorch (★ 702)
- github.com/uoe-agents/robotic-warehouse — official, in paper (★ 74)
- github.com/uoe-agents/lb-foraging — official, in paper (★ 57)
- github.com/semitable/robotic-warehouse (★ 416)
- github.com/ailabdsunipi/pymarlzooplus — PyTorch (★ 47)
- github.com/uoe-agents/revisiting-maddpg — PyTorch (★ 27)
- github.com/ShashwatNigam99/MARBLER (★ 13)
- github.com/dtabas/epymarl — PyTorch (★ 1)
- github.com/kinalmehta/epymarl — PyTorch (★ 0)
Abstract
Multi-agent deep reinforcement learning (MARL) suffers from a lack of commonly-used evaluation tasks and criteria, making comparisons between approaches difficult. In this work, we provide a systematic evaluation and comparison of three different classes of MARL algorithms (independent learning, centralised multi-agent policy gradient, value decomposition) in a diverse range of cooperative multi-agent learning tasks. Our experiments serve as a reference for the expected performance of algorithms across different learning tasks, and we provide insights regarding the effectiveness of different learning approaches. We open-source EPyMARL, which extends the PyMARL codebase to include additional algorithms and allow for flexible configuration of algorithm implementation details such as parameter sharing. Finally, we open-source two environments for multi-agent research which focus on coordination under sparse rewards.
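To make the value-decomposition class concrete: in the simplest variant (VDN-style), the joint action-value is factored as a sum of per-agent utilities, so each agent can act greedily on its own values while training optimises the team return. The sketch below is illustrative only, using random NumPy arrays in place of learned Q-networks; the names `per_agent_q` and `joint_q` are our own, not from EPyMARL.

```python
import numpy as np
from itertools import product

# Hypothetical per-agent Q-values for a 2-agent task with 3 actions each.
# In practice these come from per-agent networks; random values suffice
# to illustrate the additive factorisation.
rng = np.random.default_rng(0)
num_agents, num_actions = 2, 3
per_agent_q = [rng.standard_normal(num_actions) for _ in range(num_agents)]

def joint_q(actions):
    """Q_tot(a_1, ..., a_n) = sum_i Q_i(a_i) -- the additive factorisation."""
    return sum(q[a] for q, a in zip(per_agent_q, actions))

# Decentralised greedy selection: each agent argmaxes its own utility...
greedy = tuple(int(np.argmax(q)) for q in per_agent_q)

# ...which, under the additive factorisation, also maximises Q_tot
# over the full joint action space (the key property of this class).
best = max(product(range(num_actions), repeat=num_agents), key=joint_q)
assert greedy == best
print("greedy joint action:", greedy)
```

The point of the factorisation is exactly this equivalence: a centralised training signal on `Q_tot` can be combined with fully decentralised greedy execution.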