
Fine-Grained Gap-Dependent Bounds for Tabular MDPs via Adaptive Multi-Step Bootstrap

2021-02-09

Haike Xu, Tengyu Ma, Simon S. Du

Abstract

This paper presents a new model-free algorithm for episodic finite-horizon Markov Decision Processes (MDPs), Adaptive Multi-step Bootstrap (AMB), which enjoys a stronger gap-dependent regret bound. The first innovation is to estimate the optimal Q-function by combining an optimistic bootstrap with an adaptive multi-step Monte Carlo rollout. The second innovation is to select the action with the largest confidence interval length among admissible actions that are not dominated by any other actions. We show that when each state has a unique optimal action, AMB achieves a gap-dependent regret bound that scales only with the sum of the inverses of the sub-optimality gaps. In contrast, Simchowitz and Jamieson (2019) showed that all upper-confidence-bound (UCB) algorithms suffer an additional $\frac{S}{\Delta_{\min}}$ regret due to over-exploration, where $\Delta_{\min}$ is the minimum sub-optimality gap and $S$ is the number of states. We further show that for general MDPs, AMB suffers an additional $\frac{|Z_{\mathrm{mul}}|}{\Delta_{\min}}$ regret, where $Z_{\mathrm{mul}}$ is the set of state-action pairs $(s,a)$ such that $a$ is a non-unique optimal action for $s$. We complement our upper bound with a lower bound showing that the dependency on $\frac{|Z_{\mathrm{mul}}|}{\Delta_{\min}}$ is unavoidable for any consistent algorithm. This lower bound also implies a separation between reinforcement learning and contextual bandits.
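
The action-selection rule described in the second innovation can be pictured with a minimal sketch. This is an illustration of the idea, not the paper's algorithm: it assumes the learner maintains per-action lower and upper confidence bounds on the optimal Q-values at the current state, and the function name select_action and the arrays q_lower/q_upper are hypothetical.

```python
import numpy as np

def select_action(q_lower, q_upper):
    """Sketch of AMB-style action selection under assumed confidence bounds.

    q_lower, q_upper: arrays giving lower/upper confidence bounds on the
    optimal Q-value of each action at the current state.
    """
    # An action is dominated if its upper bound falls below some other
    # action's lower bound; admissible actions are the non-dominated ones,
    # i.e. those whose upper bound reaches the best lower bound.
    best_lower = np.max(q_lower)
    admissible = np.flatnonzero(q_upper >= best_lower)
    # Among admissible actions, pick the one with the widest confidence
    # interval: the most uncertain candidate that could still be optimal.
    widths = q_upper[admissible] - q_lower[admissible]
    return admissible[np.argmax(widths)]

# Example: three actions with overlapping confidence intervals.
q_lo = np.array([0.2, 0.5, 0.1])
q_hi = np.array([0.9, 0.7, 0.4])
print(select_action(q_lo, q_hi))  # -> 0: admissible with the widest interval
```

Intuitively, directing exploration at the most uncertain non-dominated action, rather than the one with the highest upper bound, is what distinguishes this rule from UCB-style selection, which the abstract notes incurs an extra $\frac{S}{\Delta_{\min}}$ regret from over-exploration.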
