
Horizon-Free Reinforcement Learning in Polynomial Time: the Power of Stationary Policies

2022-03-24

Zihan Zhang, Xiangyang Ji, Simon S. Du


Abstract

This paper gives the first polynomial-time algorithm for tabular Markov Decision Processes (MDPs) that enjoys a regret bound independent of the planning horizon. Specifically, we consider a tabular MDP with S states, A actions, a planning horizon H, and total reward bounded by 1, where the agent plays for K episodes. We design an algorithm that achieves an O(poly(S, A, log K)√K) regret, in contrast to existing bounds which either have an additional polylog(H) dependency (Zhang et al., 2020) or an exponential dependency on S (Li et al., 2021). Our result relies on a sequence of new structural lemmas establishing the approximation power, stability, and concentration properties of stationary policies, which may have applications in other problems related to Markov chains.
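For reference, the episodic regret notion the abstract bounds can be written in the standard form below; the notation is an assumption on our part (the abstract does not fix it): π_k is the policy played in episode k, s_1 the initial state, V* the optimal value function, and V^{π_k} the value of policy π_k.

```latex
% Cumulative regret over K episodes (standard definition, not from the abstract):
\mathrm{Regret}(K) \;=\; \sum_{k=1}^{K} \Bigl( V^{*}(s_1) - V^{\pi_k}(s_1) \Bigr)
```

Under the paper's normalization that total reward per episode is bounded by 1, each summand lies in [0, 1], which is what makes a bound scaling as √K (rather than with H) meaningful.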
