Provably Adaptive Average Reward Reinforcement Learning for Metric Spaces

2024-10-25

Avik Kar, Rahul Singh

Abstract

We study infinite-horizon average-reward reinforcement learning (RL) for Lipschitz MDPs and develop an algorithm, ZoRL, that discretizes the state-action space adaptively and zooms into promising regions. We show that its regret is bounded by O(T^(1 − 1/d_eff)), where d_eff = 2d_S + d_z + 3, d_S is the dimension of the state space, and d_z is the zooming dimension. Since d_z is a problem-dependent quantity, this allows us to conclude that if the MDP is benign, then the regret will be small. We note that the existing notion of zooming dimension for average-reward RL is defined in terms of policy coverings; it can therefore be huge when the policy class is rich even though the underlying MDP is simple, in which case the regret upper bound is nearly O(T). The zooming dimension proposed in the current work is bounded above by d, the dimension of the state-action space, and hence is truly adaptive, i.e., it shows how to capture adaptivity gains for infinite-horizon average-reward RL. ZoRL outperforms other state-of-the-art algorithms in experiments, thereby demonstrating the gains arising from adaptivity.
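To make the bound's arithmetic concrete, here is a minimal sketch that evaluates the regret exponent 1 − 1/d_eff from the abstract's formula d_eff = 2d_S + d_z + 3. The dimension values used below are hypothetical examples, not taken from the paper's experiments.

```python
def regret_exponent(d_S: int, d_z: float) -> float:
    """Exponent in the regret bound O(T^(1 - 1/d_eff)),
    with d_eff = 2*d_S + d_z + 3 as defined in the abstract."""
    d_eff = 2 * d_S + d_z + 3
    return 1.0 - 1.0 / d_eff

# A smaller zooming dimension d_z (a "benign" MDP) yields a smaller
# exponent, i.e., regret that grows more slowly with the horizon T.
print(regret_exponent(d_S=1, d_z=0))  # d_eff = 5, exponent 0.8
print(regret_exponent(d_S=1, d_z=2))  # d_eff = 7, exponent ~0.857
```

Since d_z is bounded above by d, the state-action dimension, the exponent never exceeds the non-adaptive worst case, which is the sense in which the algorithm's guarantee adapts to problem difficulty.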
