
Heavy-Tailed Linear Bandits: Huber Regression with One-Pass Update

2025-03-01

Jing Wang, Yu-Jie Zhang, Peng Zhao, Zhi-Hua Zhou


Abstract

We study stochastic linear bandits with heavy-tailed noise. Two principled strategies for handling heavy-tailed noise, truncation and median-of-means, have been introduced to heavy-tailed bandits. Nonetheless, these methods rely on specific noise assumptions or bandit structures, limiting their applicability to general settings. The recent work [Huang et al., 2024] develops a soft truncation method via adaptive Huber regression to address these limitations. However, their method suffers from undesirable computational cost: it requires storing all historical data and performing a full pass over these data at each round. In this paper, we propose a one-pass algorithm based on the online mirror descent framework. Our method updates using only the current data at each round, reducing the per-round computational cost from $\mathcal{O}(t \log T)$ to $\mathcal{O}(1)$ with respect to the current round $t$ and the time horizon $T$, and achieves a near-optimal and variance-aware regret of order $\widetilde{\mathcal{O}}\big(d\, T^{\frac{1-\epsilon}{2(1+\epsilon)}} \sqrt{\textstyle\sum_{t=1}^{T} \nu_t^2} + d\, T^{\frac{1-\epsilon}{2(1+\epsilon)}}\big)$, where $d$ is the dimension and $\nu_t^{1+\epsilon}$ is the $(1+\epsilon)$-th central moment of the reward at round $t$.
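To make the one-pass idea concrete, here is a minimal sketch of an online-mirror-descent-style Huber regression update that touches only the current sample each round. The class name, step size `eta`, fixed truncation threshold `tau`, and regularizer `lam` are illustrative assumptions, not the paper's exact algorithm (which, e.g., adapts the Huber threshold over time); the point is that the rank-one Sherman-Morrison update keeps the per-round cost independent of $t$ and $T$.

```python
import numpy as np

def huber_grad(r, tau):
    # Gradient of the Huber loss w.r.t. the residual r: linear inside
    # [-tau, tau], clipped (soft-truncated) outside.
    return r if abs(r) <= tau else tau * np.sign(r)

class OnePassHuberOMD:
    # Hypothetical sketch: OMD with a Huber loss and the Gram matrix as the
    # (time-varying) mirror map. Only the current (x, y) pair is used per round.
    def __init__(self, d, lam=1.0, eta=1.0, tau=5.0):
        self.theta = np.zeros(d)
        self.H_inv = np.eye(d) / lam  # inverse of H_t = lam*I + sum_s x_s x_s^T
        self.eta, self.tau = eta, tau

    def update(self, x, y):
        # Sherman-Morrison rank-one update: O(d^2) per round, no replay of history.
        Hx = self.H_inv @ x
        self.H_inv -= np.outer(Hx, Hx) / (1.0 + x @ Hx)
        r = x @ self.theta - y
        grad = huber_grad(r, self.tau) * x  # clipped-residual gradient
        self.theta -= self.eta * (self.H_inv @ grad)
```

With `eta=1` and a residual inside the threshold, the update coincides with recursive least squares; the clipping is what tames heavy-tailed rewards (e.g. Student-t noise with infinite variance) without storing or re-scanning past data.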
