
Fully-Decentralized MADDPG with Networked Agents

2025-03-09

Diego Bolliger, Lorenz Zauter, Robert Ziegler


Abstract

In this paper, we devise three actor-critic algorithms with decentralized training for multi-agent reinforcement learning in cooperative, adversarial, and mixed settings with continuous action spaces. To this end, we adapt the MADDPG algorithm by applying a networked communication approach between agents. We introduce surrogate policies in order to decentralize the training while allowing for local communication during training. In empirical tests, the decentralized algorithms achieve results comparable to the original MADDPG while reducing computational cost; the reduction becomes more pronounced as the number of agents grows.
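The networked-communication idea behind the surrogate policies can be illustrated with a consensus (neighbor-averaging) step over a communication graph: each agent keeps a local estimate of the other agents' policy parameters and repeatedly averages it with its graph neighbors. The sketch below is a minimal, hypothetical illustration of such a consensus update, not the paper's implementation; the function and variable names are our own.

```python
import numpy as np

def consensus_step(params, adjacency):
    """One round of neighbor averaging over the communication graph.

    params:    (n_agents, dim) array, each row an agent's local
               surrogate-policy parameter estimate.
    adjacency: (n_agents, n_agents) 0/1 matrix with self-loops.
    """
    # Row-normalize the adjacency matrix to get mixing weights.
    weights = adjacency / adjacency.sum(axis=1, keepdims=True)
    return weights @ params

# Ring of 4 agents, each also connected to itself (a sparse,
# fully-decentralized communication topology).
n = 4
A = np.eye(n)
for i in range(n):
    A[i, (i + 1) % n] = 1
    A[i, (i - 1) % n] = 1

rng = np.random.default_rng(0)
params = rng.normal(size=(n, 3))  # each agent's initial local estimate
target = params.mean(axis=0)      # consensus limit: the global average

for _ in range(200):
    params = consensus_step(params, A)
```

Because the mixing matrix here is doubly stochastic, repeated averaging drives every agent's estimate to the global mean, so agents can agree on (surrogate) policy parameters using only local exchanges rather than a central coordinator.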
