
Multi-agent Reinforcement Learning for Networked System Control

2020-04-03 · ICLR 2020 · Code Available

Tianshu Chu, Sandeep Chinchali, Sachin Katti


Abstract

This paper considers multi-agent reinforcement learning (MARL) in networked system control. Specifically, each agent learns a decentralized control policy based on local observations and messages from connected neighbors. We formulate such a networked MARL (NMARL) problem as a spatiotemporal Markov decision process and introduce a spatial discount factor to stabilize the training of each local agent. Further, we propose a new differentiable communication protocol, called NeurComm, to reduce information loss and non-stationarity in NMARL. Experiments in realistic NMARL scenarios of adaptive traffic signal control and cooperative adaptive cruise control show that an appropriate spatial discount factor effectively enhances the learning curves of non-communicative MARL algorithms, while NeurComm outperforms existing communication protocols in both learning efficiency and control performance.
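To illustrate the spatial discount factor described above: the idea is to down-weight the influence of distant agents so that each local agent's training signal is dominated by nearby neighbors. A minimal sketch, assuming (hypothetically) that agent i's effective reward is each agent j's reward weighted by alpha raised to the hop distance between i and j — the exact weighting and distance measure used by the paper may differ:

```python
from collections import deque

def graph_distances(adj, source):
    """BFS hop distances from `source` in an adjacency-list graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def spatially_discounted_reward(adj, rewards, agent, alpha=0.9):
    """Sum every reachable agent's reward, discounted by alpha^(hop distance).

    alpha = 1 recovers a fully global (cooperative) reward; alpha = 0
    recovers a purely local reward. Intermediate values trade off the two.
    """
    dist = graph_distances(adj, agent)
    return sum(alpha ** dist[j] * rewards[j] for j in dist)

# Hypothetical 4-agent line network: 0 - 1 - 2 - 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
rewards = {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0}
# Agent 1 sees: own reward + 0.9 * (agents 0 and 2) + 0.81 * agent 3
print(spatially_discounted_reward(adj, rewards, 1, alpha=0.9))  # -> 3.61
```

With alpha between 0 and 1, faraway agents contribute exponentially less to the local objective, which is what stabilizes each agent's learning against changes happening elsewhere in the network.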
