A Simple and Optimal Policy Design with Safety against Heavy-Tailed Risk for Stochastic Bandits

2022-06-07

David Simchi-Levi, Zeyu Zheng, Feng Zhu

Abstract

We study the stochastic multi-armed bandit problem and design new policies that enjoy both worst-case optimality for expected regret and light-tailed risk for the regret distribution. Specifically, our policy design (i) achieves worst-case optimality for the expected regret at order O(√(KT ln T)), and (ii) has a worst-case tail probability of incurring a regret larger than any x > 0 upper bounded by exp(−Ω(x/√(KT))), a rate that we prove to be the best achievable with respect to T among all worst-case optimal policies. Compared to standard confidence-bound-based policies, our proposed policy strikes a delicate balance by doing more exploration at the beginning of the time horizon and more exploitation when approaching the end. We also enhance the policy design to accommodate the "any-time" setting where T is unknown a priori, and prove performance guarantees matching those of the "fixed-time" setting with known T. Numerical experiments illustrate the theoretical findings. From a managerial perspective, we find that our new policy design yields better tail distributions and is preferable to celebrated policies, especially when (i) there is a risk of under-estimating the volatility profile, or (ii) tuning policy hyper-parameters is challenging. We conclude by extending our policy design to the stochastic linear bandit setting, again obtaining both worst-case optimality for expected regret and light-tailed risk for the regret distribution.
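For reference, a minimal LaTeX rendering of the two guarantees stated in the abstract; the regret symbol R_T is introduced here for illustration and is not named in the abstract itself:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Guarantees from the abstract; $R_T$ denotes the cumulative regret
% over horizon $T$ with $K$ arms (symbol introduced here for clarity).
Worst-case expected regret:
\[
  \mathbb{E}[R_T] = O\!\left(\sqrt{KT\ln T}\right).
\]
Worst-case tail of the regret distribution, for any $x > 0$:
\[
  \Pr\left(R_T > x\right) \le \exp\!\left(-\Omega\!\left(x/\sqrt{KT}\right)\right).
\]
\end{document}
```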
