
Q-learning with UCB Exploration is Sample Efficient for Infinite-Horizon MDP

2019-01-27 · ICLR 2020

Kefan Dong, Yuanhao Wang, Xiaoyu Chen, Liwei Wang


Abstract

A fundamental question in reinforcement learning is whether model-free algorithms are sample efficient. Recently, Jin et al. (2018) proposed a Q-learning algorithm with a UCB exploration policy and proved that it achieves a nearly optimal regret bound for finite-horizon episodic MDPs. In this paper, we adapt Q-learning with a UCB exploration bonus to infinite-horizon MDPs with discounted rewards, without access to a generative model. We show that the sample complexity of exploration of our algorithm is bounded by Õ(SA / (ε²(1−γ)⁷)). This improves the previously best known bound of Õ(SA / (ε⁴(1−γ)⁸)) in this setting, achieved by delayed Q-learning (Strehl et al., 2006), and matches the lower bound in terms of ε as well as S and A except for logarithmic factors.
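To make the setting concrete, here is a minimal sketch of tabular Q-learning with a UCB-style exploration bonus on a discounted infinite-horizon MDP. This illustrates the general idea only: the bonus `c / sqrt(n(s, a))`, the learning-rate schedule, and the optimistic initialization at `Rmax/(1−γ)` are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

def q_learning_ucb(P, R, gamma=0.9, steps=5000, c=1.0, seed=0):
    """Tabular Q-learning with a UCB-style exploration bonus (sketch).

    P: transition tensor of shape (S, A, S); R: reward matrix in [0, 1]
    of shape (S, A). Bonus and step-size schedules are illustrative.
    """
    rng = np.random.default_rng(seed)
    S, A = R.shape
    vmax = 1.0 / (1.0 - gamma)        # upper bound on values; optimistic init
    Q = np.full((S, A), vmax)
    n = np.zeros((S, A))              # visit counts n(s, a)
    H = 1.0 / (1.0 - gamma)           # effective horizon, sets the step size
    s = 0
    for _ in range(steps):
        a = int(np.argmax(Q[s]))      # act greedily w.r.t. the optimistic Q
        n[s, a] += 1
        t = n[s, a]
        alpha = (H + 1.0) / (H + t)   # step size decaying with visit count
        bonus = c / np.sqrt(t)        # exploration bonus shrinking with visits
        s2 = int(rng.choice(S, p=P[s, a]))
        target = R[s, a] + bonus + gamma * Q[s2].max()
        Q[s, a] = (1 - alpha) * Q[s, a] + alpha * min(target, vmax)
        s = s2
    return Q
```

Because updates are convex combinations of the current estimate and a target clipped at `vmax`, the Q-values stay bounded by `Rmax/(1−γ)` throughout, which is what lets the analysis treat the estimates as optimistic upper bounds on the true values.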
