
Explore Reinforced: Equilibrium Approximation with Reinforcement Learning

2024-12-02

Ryan Yu, Mateusz Nowak, Qintong Xie, Michelle Yilin Feng, Peter Chin


Abstract

Current approximate Coarse Correlated Equilibrium (CCE) algorithms are theoretically guaranteed to converge to a strong solution concept, but they struggle to approximate equilibria for games in large stochastic environments. In contrast, modern Reinforcement Learning (RL) algorithms train faster yet yield weaker solutions. We introduce Exp3-IXrl, a blend of RL and game-theoretic approaches that separates the RL agent's action selection from the equilibrium computation while preserving the integrity of the learning process. We demonstrate that our algorithm extends equilibrium approximation algorithms to new environments. Specifically, we show improved performance in a complex and adversarial cybersecurity network environment, the Cyber Operations Research Gym, and in classical multi-armed bandit settings.
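For context on the bandit component the method builds on, below is a minimal sketch of the standard Exp3-IX algorithm (exponential weights with implicit exploration) for adversarial multi-armed bandits. This is not the paper's implementation; the function signature, parameter names, and defaults are illustrative assumptions.

```python
import math
import random

def exp3_ix(n_arms, n_rounds, loss_fn, eta=0.05, gamma=0.05, seed=0):
    """Exp3-IX sketch for adversarial bandits (illustrative, not the paper's code).

    loss_fn(t, arm) -> loss in [0, 1] for the pulled arm at round t.
    eta: learning rate; gamma: implicit-exploration bias added to the
    importance-weighting denominator, which keeps loss estimates bounded.
    Returns the final sampling distribution over arms.
    """
    rng = random.Random(seed)
    cum_loss = [0.0] * n_arms  # cumulative estimated losses per arm

    def distribution():
        # Softmax over negative cumulative estimated losses,
        # shifted by the minimum for numerical stability.
        m = min(cum_loss)
        w = [math.exp(-eta * (c - m)) for c in cum_loss]
        z = sum(w)
        return [wi / z for wi in w]

    for t in range(n_rounds):
        p = distribution()
        arm = rng.choices(range(n_arms), weights=p)[0]
        loss = loss_fn(t, arm)
        # Implicit exploration: divide by p[arm] + gamma instead of p[arm],
        # trading a small bias for much lower estimator variance.
        cum_loss[arm] += loss / (p[arm] + gamma)

    return distribution()
```

Under a fixed loss gap (one arm consistently cheaper), the returned distribution concentrates on the better arm:

```python
p = exp3_ix(3, 2000, lambda t, a: 0.1 if a == 0 else 0.9, seed=1)
# p[0] should carry most of the probability mass
```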
