
Mean-Field Sampling for Cooperative Multi-Agent Reinforcement Learning

2024-12-01

Emile Anand, Ishani Karmarkar, Guannan Qu


Abstract

Designing efficient algorithms for multi-agent reinforcement learning (MARL) is fundamentally challenging because the size of the joint state and action spaces grows exponentially in the number of agents. These difficulties are exacerbated when balancing sequential global decision-making with local agent interactions. In this work, we propose a new algorithm, SUBSAMPLE-MFQ (Subsample-Mean-Field-Q-learning), and a decentralized randomized policy for a system with n agents. For any k ≤ n, our algorithm learns a policy for the system in time polynomial in k. We prove that this learned policy converges to the optimal policy at a rate of order O(1/k) as the number of subsampled agents k increases. In particular, this bound is independent of the number of agents n.
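The core idea in the abstract — learning over a mean-field statistic of k subsampled agents instead of the exponential joint state space — can be illustrated with a toy sketch. Everything below (the binary local states, the toy dynamics, the reward, and all hyperparameters) is a hypothetical illustration of subsampled mean-field Q-learning in general, not the paper's actual SUBSAMPLE-MFQ algorithm or its guarantees:

```python
import random
from collections import defaultdict

def subsample_mean_field(local_states, k, rng):
    """Summarize k uniformly subsampled agents' local states.

    With binary local states the empirical mean field reduces to the
    count of sampled agents in state 1, giving only k + 1 possible
    values regardless of the total number of agents n.
    """
    return sum(rng.sample(local_states, k))

def train_q(n=10, k=3, episodes=200, horizon=20,
            alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning over the subsampled mean-field statistic.

    The Q-table is indexed by (mean-field bin, action), so its size is
    polynomial in k and independent of n -- the structural point the
    abstract makes. Dynamics and reward here are invented for the demo.
    """
    rng = random.Random(seed)
    actions = (0, 1)
    Q = defaultdict(float)  # keys: (mean_field_count, action)
    for _ in range(episodes):
        states = [rng.randint(0, 1) for _ in range(n)]
        for _ in range(horizon):
            mf = subsample_mean_field(states, k, rng)
            if rng.random() < eps:  # epsilon-greedy exploration
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda x: Q[(mf, x)])
            # Toy cooperative dynamics: action 1 nudges every agent
            # toward local state 1; reward is the fraction in state 1.
            states = [1 if rng.random() < 0.5 + 0.3 * a else 0
                      for _ in range(n)]
            reward = sum(states) / n
            mf_next = subsample_mean_field(states, k, rng)
            best_next = max(Q[(mf_next, x)] for x in actions)
            Q[(mf, a)] += alpha * (reward + gamma * best_next - Q[(mf, a)])
    return Q
```

Note that the learned table has at most (k + 1) × 2 entries even though the joint state space of the n agents has 2^n elements; this collapse of the state representation, not the toy dynamics, is what the sketch is meant to convey.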
