Variance-aware robust reinforcement learning with linear function approximation under heavy-tailed rewards
Xiang Li, Qiang Sun
Abstract
This paper presents two algorithms, AdaOFUL and VARA, for online sequential decision-making in the presence of heavy-tailed rewards with only finite variances. For linear stochastic bandits, we address the issue of heavy-tailed rewards by modifying the adaptive Huber regression and proposing AdaOFUL. AdaOFUL achieves a state-of-the-art regret bound of $\widetilde{O}\big(d\big(\sum_{t=1}^{T}\nu_t^2\big)^{1/2}+d\big)$ as if the rewards were uniformly bounded, where $\nu_t^2$ is the observed conditional variance of the reward at round $t$, $d$ is the feature dimension, and $\widetilde{O}(\cdot)$ hides logarithmic dependence. Building upon AdaOFUL, we propose VARA for linear MDPs, which achieves a tighter variance-aware regret bound of $\widetilde{O}(d\sqrt{HG^{*}K})$. Here, $H$ is the length of episodes, $K$ is the number of episodes, and $G^{*}$ is a smaller instance-dependent quantity that can be bounded by other instance-dependent quantities when additional structural conditions on the MDP are satisfied. Our regret bound is superior to the current state-of-the-art bounds in three ways: (1) it depends on a tighter instance-dependent quantity and has optimal dependence on $d$ and $H$; (2) we can obtain further instance-dependent bounds of $G^{*}$ under additional structural conditions on the MDP; and (3) our regret bound is valid even when rewards have only finite variances, achieving a level of generality unmatched by previous works. Overall, our modified adaptive Huber regression algorithm may serve as a useful building block in the design of algorithms for online problems with heavy-tailed rewards.
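For readers unfamiliar with the building block named above, the following LaTeX sketch writes out the standard Huber loss and a generic Huber regression estimator over observed features and rewards. This is background only, not the paper's method: AdaOFUL's variance-aware reweighting and its adaptive, per-round choice of the robustification parameter $\tau_s$ are specified in the paper itself, so the $\tau_s$ below is a placeholder assumption.

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\begin{document}

% Standard Huber loss with robustification parameter \tau > 0:
% quadratic near zero, linear in the tails, so a single extreme
% observation has bounded influence on the fit.
\[
\ell_{\tau}(x) =
\begin{cases}
  x^2/2, & |x| \le \tau,\\
  \tau\,|x| - \tau^2/2, & |x| > \tau.
\end{cases}
\]

% A generic (non-adaptive) Huber regression estimator after t rounds,
% with features \phi_s and rewards y_s. AdaOFUL builds on this form
% but reweights rounds and tunes \tau_s adaptively; see the paper
% for the exact rule.
\[
\widehat{\theta}_t \in \operatorname*{arg\,min}_{\theta \in \mathbb{R}^d}
  \sum_{s=1}^{t} \ell_{\tau_s}\!\big(y_s - \langle \phi_s, \theta \rangle\big)
\]

\end{document}
```

Because the loss grows only linearly beyond $\tau$, a heavy-tailed reward cannot pull the estimate arbitrarily far, which is the robustness property that makes Huber-type regression a natural substitute for least squares when rewards have only finite variances.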