
Residual Bootstrap Exploration for Bandit Algorithms

2020-02-19

Chi-Hua Wang, Yang Yu, Botao Hao, Guang Cheng


Abstract

In this paper, we propose a novel perturbation-based exploration method for bandit algorithms with bounded or unbounded rewards, called residual bootstrap exploration (ReBoot). ReBoot enforces exploration by injecting data-driven randomness through a residual-based perturbation mechanism. This mechanism captures the underlying distributional properties of the fitting errors and, more importantly, boosts exploration to escape from suboptimal solutions (at small sample sizes) by inflating the variance level in an unconventional way. In theory, with an appropriate variance-inflation level, ReBoot provably secures instance-dependent logarithmic regret in Gaussian multi-armed bandits. We evaluate ReBoot on several synthetic multi-armed bandit problems and observe that it performs better for unbounded rewards and more robustly than Giro (Kveton et al., 2018) and PHE (Kveton et al., 2019), with computational efficiency comparable to Thompson sampling.
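To make the abstract's mechanism concrete, below is a minimal sketch of residual bootstrap exploration for a Gaussian multi-armed bandit, based only on the description above: each arm's index is its sample mean perturbed by multiplier-bootstrapped residuals, with two pseudo-residuals appended to inflate the perturbation variance at small sample sizes. The function names, the `sigma_inflate` parameter, and the exact form of the pseudo-residuals are assumptions for illustration, not the paper's definitive algorithm.

```python
import numpy as np


def reboot_index(rewards, sigma_inflate=1.0, rng=None):
    """Perturbed-mean index for one arm via residual bootstrap (sketch).

    `sigma_inflate` (assumed name) sets the variance-inflation level
    through two pseudo-residuals of magnitude sigma_inflate * sqrt(n).
    """
    rng = np.random.default_rng() if rng is None else rng
    y = np.asarray(rewards, dtype=float)
    n = len(y)
    mean = y.mean()
    # Residuals of the fitted model (here: the sample mean).
    resid = y - mean
    # Pseudo-residuals inflate the perturbation variance so that arms
    # with few samples keep exploring (the "unconventional" inflation
    # the abstract refers to; exact form is an assumption).
    scale = sigma_inflate * np.sqrt(n)
    all_resid = np.concatenate([resid, [scale, -scale]])
    # Gaussian multiplier-bootstrap weights on the residuals inject
    # data-driven randomness into the index.
    w = rng.standard_normal(all_resid.shape)
    return mean + (w * all_resid).sum() / n


def reboot_bandit(arm_means, horizon, sigma_inflate=1.0, seed=0):
    """Run ReBoot-style exploration on Gaussian arms; return pull counts."""
    rng = np.random.default_rng(seed)
    k = len(arm_means)
    history = [[] for _ in range(k)]
    # Pull each arm once to initialize its residual set.
    for a in range(k):
        history[a].append(rng.normal(arm_means[a], 1.0))
    pulls = np.ones(k, dtype=int)
    for _ in range(horizon - k):
        indices = [reboot_index(history[a], sigma_inflate, rng) for a in range(k)]
        a = int(np.argmax(indices))
        history[a].append(rng.normal(arm_means[a], 1.0))
        pulls[a] += 1
    return pulls
```

Because the perturbation is built from each arm's own residuals, the randomness adapts to the observed reward scale rather than relying on a fixed prior, which is consistent with the abstract's claim of robustness for unbounded rewards.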
