Q-learning with Nearest Neighbors
Devavrat Shah, Qiaomin Xie
Abstract
We consider model-free reinforcement learning for infinite-horizon discounted Markov Decision Processes (MDPs) with a continuous state space and unknown transition kernel, when only a single sample path of the system under an arbitrary policy is available. We study the Nearest Neighbor Q-Learning (NNQL) algorithm, which learns the optimal Q-function using a nearest neighbor regression method. As the main contribution, we provide a tight finite-sample analysis of the convergence rate. In particular, for MDPs with a $d$-dimensional state space and discount factor $\gamma \in (0,1)$, given an arbitrary sample path with "covering time" $L$, we establish that the algorithm is guaranteed to output an $\epsilon$-accurate estimate of the optimal Q-function using $\tilde{O}\big(L/(\epsilon^3(1-\gamma)^7)\big)$ samples. For instance, for a well-behaved MDP, the covering time of the sample path under a purely random policy scales as $\tilde{O}(1/\epsilon^d)$, so the sample complexity scales as $\tilde{O}(1/\epsilon^{d+3})$. Indeed, we establish a lower bound showing that a dependence of $\tilde{\Omega}(1/\epsilon^{d+2})$ is necessary.
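To make the setup concrete, here is a minimal sketch of nearest-neighbor-style Q-learning from a single sample path. It stores Q-values on a fixed covering net of states and answers lookups by 1-nearest-neighbor regression. The class name `NNQL`, the decaying step size, and the asynchronous one-transition updates are simplifying assumptions for illustration; the paper's actual algorithm proceeds in epochs with averaged nearest-neighbor updates, so this should be read as a sketch of the idea, not the authors' exact construction.

```python
import numpy as np

class NNQL:
    """Sketch of nearest-neighbor Q-learning on a continuous state space.

    Q-values are stored only at a fixed covering net of center states;
    the Q-function at an arbitrary state is read off from its nearest
    center (1-nearest-neighbor regression). Illustrative simplification
    of the NNQL idea, not the paper's epoch-based algorithm.
    """

    def __init__(self, centers, n_actions, gamma=0.9):
        self.centers = np.asarray(centers)       # (m, d) net of states
        self.n_actions = n_actions
        self.gamma = gamma                       # discount factor
        self.Q = np.zeros((len(self.centers), n_actions))
        self.counts = np.zeros((len(self.centers), n_actions))

    def _nearest(self, state):
        """Index of the center closest to `state` (Euclidean distance)."""
        dists = np.linalg.norm(self.centers - np.asarray(state), axis=1)
        return int(np.argmin(dists))

    def q_value(self, state, action):
        """Nearest-neighbor estimate of Q(state, action)."""
        return self.Q[self._nearest(state), action]

    def update(self, state, action, reward, next_state):
        """One Q-learning update from a single observed transition."""
        i = self._nearest(state)
        self.counts[i, action] += 1
        alpha = 1.0 / self.counts[i, action]     # decaying step size
        target = reward + self.gamma * self.Q[self._nearest(next_state)].max()
        self.Q[i, action] += alpha * (target - self.Q[i, action])

# Example usage along a single sample path under a purely random behavior
# policy (`env` is a hypothetical gym-style environment, not from the paper):
#   agent = NNQL(centers=net_points, n_actions=n_actions)
#   s = env.reset()
#   for _ in range(num_steps):
#       a = np.random.randint(agent.n_actions)   # arbitrary (random) policy
#       s_next, r, done, _ = env.step(a)
#       agent.update(s, a, r, s_next)
#       s = env.reset() if done else s_next
```

Learning from one trajectory under a fixed behavior policy is what makes the "covering time" $L$ the right complexity measure here: updates only reach a net point when the path happens to visit its neighborhood, so the rate at which the path covers the net governs the sample complexity.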