
Multi-Agent Reinforcement Learning with Submodular Reward

2026-03-06

Wenjing Chen, Chengyuan Qian, Shuo Xing, Yi Zhou, Victoria Crawford


Abstract

In this paper, we study cooperative multi-agent reinforcement learning (MARL) where the joint reward exhibits submodularity, a natural property capturing diminishing marginal returns when adding agents to a team. Unlike standard MARL with additive rewards, submodular rewards model realistic scenarios where agent contributions overlap (e.g., multi-drone surveillance, collaborative exploration). We provide the first formal framework for this setting and develop algorithms with provable guarantees on sample efficiency and regret bounds. For known dynamics, our greedy policy optimization achieves a 1/2-approximation with complexity polynomial in the number of agents K, overcoming the exponential curse of dimensionality inherent in joint policy optimization. For unknown dynamics, we propose a UCB-based learning algorithm achieving a 1/2-regret of O(H^2KSAT) over T episodes.
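The greedy idea from the abstract can be sketched on a toy coverage problem. This is an illustrative sketch, not the paper's algorithm: the coverage sets, the `joint_reward` function, and the agent-by-agent `greedy_joint_action` routine are all hypothetical stand-ins for a submodular team reward, chosen because set coverage is a standard example of diminishing marginal returns.

```python
from itertools import product

# Hypothetical toy setting: each agent picks a sensor placement (an action),
# and the joint reward is the number of cells covered by the union of the
# chosen placements. Union coverage is monotone submodular: an action's
# marginal gain shrinks as more agents are added.
COVERAGE = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6},
}

def joint_reward(actions):
    """Submodular joint reward: size of the union of covered cells."""
    covered = set()
    for act in actions:
        covered |= COVERAGE[act]
    return len(covered)

def greedy_joint_action(num_agents, action_space):
    """Agent-by-agent greedy: each agent takes the action with the largest
    marginal gain given the actions already fixed for earlier agents.
    This needs only K * |A| reward evaluations instead of |A|^K."""
    chosen = []
    for _ in range(num_agents):
        best = max(action_space,
                   key=lambda a: joint_reward(chosen + [a]) - joint_reward(chosen))
        chosen.append(best)
    return chosen

greedy = greedy_joint_action(2, list(COVERAGE))

# Brute-force search over joint actions for comparison (exponential in K).
optimal = max(product(COVERAGE, repeat=2),
              key=lambda acts: joint_reward(list(acts)))
```

On this instance the greedy joint action matches the brute-force optimum; in general, sequential greedy maximization of a monotone submodular reward is guaranteed at least half the optimal value, mirroring the 1/2-approximation stated in the abstract.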
