Retaining Suboptimal Actions to Follow Shifting Optima in Multi-Agent Reinforcement Learning

2026-02-19

Yonghyeon Jo, Sunwoo Lee, Seungyul Han

Abstract

Value decomposition is a core approach for cooperative multi-agent reinforcement learning (MARL). However, existing methods still rely on a single optimal action and struggle to adapt when the underlying value function shifts during training, often converging to suboptimal policies. To address this limitation, we propose Successive Sub-value Q-learning (S2Q), which learns multiple sub-value functions to retain alternative high-value actions. By incorporating these sub-value functions into a Softmax-based behavior policy, S2Q encourages persistent exploration and enables the joint value Q^tot to adjust quickly to shifting optima. Experiments on challenging MARL benchmarks confirm that S2Q consistently outperforms a range of MARL algorithms, demonstrating improved adaptability and overall performance. Our code is available at https://github.com/hyeon1996/S2Q.
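To make the core mechanism concrete, below is a minimal sketch of a Softmax behavior policy built from several sub-value estimates, as the abstract describes. This is an illustration only, not the authors' implementation: the mean aggregation, the `temperature` parameter, and all names are assumptions; the actual S2Q design is specified in the paper and repository.

```python
import numpy as np

def softmax_behavior_policy(sub_q_values, temperature=1.0, rng=None):
    """Sample an action from a Softmax policy over aggregated sub-values.

    sub_q_values: array of shape (num_sub_values, num_actions), one row
    per learned sub-value function for the current observation.
    (Hypothetical sketch; not the authors' S2Q implementation.)
    """
    rng = rng or np.random.default_rng()
    # Aggregate the sub-value functions; a simple mean keeps alternative
    # high-value actions visible instead of collapsing to a single argmax.
    combined = np.asarray(sub_q_values).mean(axis=0)
    # Temperature-scaled Softmax: larger temperature -> more exploration.
    logits = combined / temperature
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy usage: three sub-value functions over four actions.
sub_q = np.array([[1.0, 0.5, 0.2, 0.9],
                  [0.8, 1.1, 0.1, 0.7],
                  [0.9, 0.4, 1.2, 0.6]])
print("sampled action:", softmax_behavior_policy(sub_q, temperature=0.5))
```

Because the policy samples from a distribution over all actions rather than committing to one greedy choice, actions that any sub-value function rates highly retain nonzero probability, which is the property the abstract credits with letting Q^tot track shifting optima.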
