
Homogenization of Multi-agent Learning Dynamics in Finite-state Markov Games

2025-06-26

Yann Kerzreho


Abstract

This paper introduces a new approach for approximating the learning dynamics of multiple reinforcement learning (RL) agents interacting in a finite-state Markov game. The idea is to rescale the learning process by simultaneously reducing the learning rate and increasing the update frequency, effectively treating each agent's parameters as a slow-evolving variable influenced by the fast-mixing game state. Under mild assumptions (ergodicity of the state process and continuity of the updates), we prove the convergence of this rescaled process to an ordinary differential equation (ODE). This ODE provides a tractable, deterministic approximation of the agents' learning dynamics. An implementation of the framework is available at: https://github.com/yannKerzreho/MarkovGameApproximation
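The timescale-separation idea in the abstract can be illustrated with a minimal sketch (not taken from the paper's repository; the chain, reward, and update rule below are our own toy choices): a single parameter is updated with a small learning rate from a fast-mixing two-state ergodic Markov chain, so its trajectory tracks the averaged ODE dθ/dt = Σₛ π(s)(r(s) − θ), whose equilibrium is the stationary-averaged reward π·r.

```python
import numpy as np

# Toy illustration of the slow/fast (ODE) approximation described above.
# The transition matrix P, reward r, and update rule are illustrative
# assumptions, not the paper's actual model.

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])      # ergodic two-state transition matrix
r = np.array([1.0, 3.0])        # per-state reward

# Stationary distribution pi of P (left eigenvector for eigenvalue 1).
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
r_bar = pi @ r                  # ODE equilibrium: theta* = pi . r

# Stochastic approximation: small learning rate, many updates.
alpha, n_steps = 1e-3, 200_000
theta, s = 0.0, 0
for _ in range(n_steps):
    theta += alpha * (r[s] - theta)   # slow parameter update
    s = rng.choice(2, p=P[s])         # fast game-state transition

print(theta, r_bar)  # theta should end up near r_bar
```

Because the state mixes much faster than the parameter moves, the noisy iterate hovers near the ODE equilibrium; shrinking `alpha` further (while increasing `n_steps`) tightens the approximation, which is the rescaling limit the paper formalizes.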
