Nearly Minimax Optimal Reinforcement Learning for Linear Mixture Markov Decision Processes
Dongruo Zhou, Quanquan Gu, Csaba Szepesvári
Abstract
We study reinforcement learning (RL) with linear function approximation where the underlying transition probability kernel of the Markov decision process (MDP) is a linear mixture model (Jia et al., 2020; Ayoub et al., 2020; Zhou et al., 2020) and the learning agent has access to either an integration or a sampling oracle of the individual basis kernels. We propose a new Bernstein-type concentration inequality for self-normalized martingales for linear bandit problems with bounded noise. Based on the new inequality, we propose a new, computationally efficient algorithm with linear function approximation named UCRL-VTR$^+$ for the aforementioned linear mixture MDPs in the episodic undiscounted setting. We show that UCRL-VTR$^+$ attains an $\tilde{O}(dH\sqrt{T})$ regret, where $d$ is the dimension of the feature mapping, $H$ is the length of the episode, and $T$ is the number of interactions with the MDP. We also prove a matching lower bound $\Omega(dH\sqrt{T})$ for this setting, which shows that UCRL-VTR$^+$ is minimax optimal up to logarithmic factors. In addition, we propose the UCLK$^+$ algorithm for the same family of MDPs under discounting and show that it attains an $\tilde{O}(d\sqrt{T}/(1-\gamma)^{1.5})$ regret, where $\gamma \in [0,1)$ is the discount factor. Our upper bound matches the lower bound $\Omega(d\sqrt{T}/(1-\gamma)^{1.5})$ proved by Zhou et al. (2020) up to logarithmic factors, suggesting that UCLK$^+$ is nearly minimax optimal. To the best of our knowledge, these are the first computationally efficient, nearly minimax optimal algorithms for RL with linear function approximation.
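For concreteness, the linear mixture assumption from the cited works can be sketched as follows; the notation $\phi$, $\phi_i$, and $\theta^*$ is standard in those papers and assumed here rather than quoted from this abstract:

$$\mathbb{P}(s' \mid s, a) \;=\; \big\langle \phi(s' \mid s, a),\, \theta^* \big\rangle \;=\; \sum_{i=1}^{d} \theta^*_i \, \phi_i(s' \mid s, a),$$

where $\phi_1, \dots, \phi_d$ are the known basis kernels and $\theta^* \in \mathbb{R}^d$ is the unknown parameter the agent must learn; the integration or sampling oracle mentioned above gives the agent access to each individual $\phi_i(\cdot \mid s, a)$.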