VOQL: Towards Optimal Regret in Model-free RL with Nonlinear Function Approximation
Alekh Agarwal, Yujia Jin, Tong Zhang
Abstract
We study time-inhomogeneous episodic reinforcement learning (RL) under general function approximation and sparse rewards. We design a new algorithm, Variance-weighted Optimistic Q-Learning (VOQL), based on Q-learning, and bound its regret assuming completeness and bounded Eluder dimension for the regression function class. As a special case, VOQL achieves Õ(d√(HT) + d^6 H^5) regret over T episodes for a horizon-H MDP under (d-dimensional) linear function approximation, which is asymptotically optimal. To obtain this improved regret, our algorithm incorporates weighted regression-based upper and lower bounds on the optimal value function. The algorithm is computationally efficient given a regression oracle over the function class, making this the first computationally tractable and statistically optimal approach for linear MDPs.
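The two ingredients named in the abstract, variance-weighted regression and optimism, can be illustrated with a minimal sketch in the linear special case. This is not the paper's VOQL algorithm: the function names, the fixed bonus multiplier `beta`, and the use of plain inverse-variance weights are all simplifying assumptions for illustration. The sketch fits a weighted ridge regression (downweighting high-variance samples) and returns an upper bound on the value estimate via an elliptical confidence bonus.

```python
import numpy as np

def weighted_ridge(phi, y, weights, lam=1.0):
    """Variance-weighted ridge regression (illustrative, not the paper's VOQL).

    phi:     (n, d) feature matrix
    y:       (n,)   regression targets (e.g. Bellman backups)
    weights: (n,)   inverse-variance weights, sigma_i^{-2}
    Returns the estimate theta and the weighted Gram matrix A.
    """
    A = phi.T @ (weights[:, None] * phi) + lam * np.eye(phi.shape[1])
    theta = np.linalg.solve(A, phi.T @ (weights * y))
    return theta, A

def optimistic_value(theta, A, phi_query, beta=1.0):
    """Point estimate plus an elliptical-norm bonus: an optimistic upper bound.

    beta is a confidence-width parameter; in theory it would be set from the
    regression class, here it is just a hypothetical constant.
    """
    bonus = beta * np.sqrt(phi_query @ np.linalg.solve(A, phi_query))
    return phi_query @ theta + bonus
```

A matching pessimistic lower bound, as used by VOQL, would subtract the same bonus instead of adding it; the weights shrink the confidence set faster in directions where targets are low-variance, which is the source of the improved T-dependence.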