Logarithmic Regret for Reinforcement Learning with Linear Function Approximation
Jiafan He, Dongruo Zhou, Quanquan Gu
Abstract
Reinforcement learning (RL) with linear function approximation has received increasing attention recently. However, existing work has focused on obtaining $\sqrt{T}$-type regret bounds, where $T$ is the number of interactions with the MDP. In this paper, we show that logarithmic regret is attainable under two recently proposed linear MDP assumptions, provided that there exists a positive sub-optimality gap for the optimal action-value function. More specifically, under the linear MDP assumption (Jin et al. 2019), the LSVI-UCB algorithm can achieve $\tilde{O}\bigl(d^{3}H^{5}/\mathrm{gap}_{\min}\cdot\log(T)\bigr)$ regret; and under the linear mixture MDP assumption (Ayoub et al. 2020), the UCRL-VTR algorithm can achieve $\tilde{O}\bigl(d^{2}H^{5}/\mathrm{gap}_{\min}\cdot\log^{3}(T)\bigr)$ regret, where $d$ is the dimension of the feature mapping, $H$ is the length of an episode, $\mathrm{gap}_{\min}$ is the minimal sub-optimality gap, and $\tilde{O}$ hides all logarithmic terms except $\log(T)$. To the best of our knowledge, these are the first logarithmic regret bounds for RL with linear function approximation. We also establish gap-dependent lower bounds for the two linear MDP models.
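For readers unfamiliar with the gap condition, the sketch below shows how the sub-optimality gap is commonly defined in episodic MDPs; this is the standard formalization and is assumed to match the paper's setting up to notation. Here $Q_h^*$ and $V_h^*$ denote the optimal action-value and value functions at step $h$ of an episode of length $H$:

```latex
% Sketch: standard sub-optimality gap in an episodic MDP (assumed to match
% the paper's definition up to notation).
% Gap of action a in state s at step h: how much worse taking a is
% than acting optimally from s.
\mathrm{gap}_h(s,a) = V_h^*(s) - Q_h^*(s,a)
% Minimal gap: the smallest strictly positive gap over all steps,
% states, and actions; the regret bounds above scale with 1/gap_min.
\mathrm{gap}_{\min} = \min_{(h,s,a)\,:\,\mathrm{gap}_h(s,a) > 0} \mathrm{gap}_h(s,a)
```

Intuitively, a positive $\mathrm{gap}_{\min}$ means every sub-optimal action is sub-optimal by a bounded-away-from-zero margin, which is what allows the regret to grow only logarithmically in $T$ rather than as $\sqrt{T}$.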