
First-Order Bayesian Regret Analysis of Thompson Sampling

2019-02-02

Sébastien Bubeck, Mark Sellke


Abstract

We address online combinatorial optimization when the player has a prior over the adversary's sequence of losses. In this framework, Russo and Van Roy proposed an information-theoretic analysis of Thompson Sampling based on the information ratio, resulting in optimal worst-case regret bounds. In this paper we introduce three novel ideas to this line of work. First, we propose a new quantity, the scale-sensitive information ratio, which allows us to obtain more refined first-order regret bounds (i.e., bounds of the form √(L^*), where L^* is the loss of the best combinatorial action). Second, we replace the entropy over combinatorial actions by a coordinate entropy, which allows us to obtain the first optimal worst-case bound for Thompson Sampling in the combinatorial setting. Finally, we introduce a novel link between Bayesian agents and frequentist confidence intervals. Combining these ideas, we show that the classical multi-armed bandit first-order regret bound O(√(d L^*)) still holds true in the more challenging and more general semi-bandit scenario. This latter result improves on the previous state-of-the-art bound O(√((d + m^3) L^*)) by Lykouris, Sridharan and Tardos. Moreover, we sharpen these results with two technical ingredients. The first leverages a recent insight of Zimmert and Lattimore to replace the Shannon entropy with more refined potential functions in the analysis. The second is a Thresholded Thompson Sampling algorithm, which slightly modifies the original algorithm by never playing low-probability actions. This thresholding results in fully T-independent regret bounds when L^* is almost surely upper-bounded, which we show does not hold for ordinary Thompson Sampling.
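
The thresholding idea described above is easiest to see in the plain multi-armed bandit case. The sketch below is a hypothetical Python illustration, not the paper's construction: the Beta-Bernoulli reward model, the Monte Carlo estimate of each arm's play probability, and the `threshold` value are assumptions made purely for the example, whereas the paper works in the more general combinatorial semi-bandit setting.

```python
import numpy as np

def thresholded_thompson_sampling(means, horizon, threshold=0.01, n_samples=1000, seed=0):
    """Bernoulli-bandit sketch of Thompson Sampling with a play-probability
    threshold: arms whose estimated probability of being optimal falls below
    `threshold` are never played in that round (illustrative assumption)."""
    rng = np.random.default_rng(seed)
    d = len(means)
    # Beta(1, 1) prior on each arm's mean reward; loss of a pull is 1 - reward.
    alpha = np.ones(d)
    beta = np.ones(d)
    total_loss = 0.0
    for _ in range(horizon):
        # Under Thompson Sampling, the probability of playing an arm equals its
        # posterior probability of being optimal; estimate it by Monte Carlo.
        draws = rng.beta(alpha, beta, size=(n_samples, d))
        p_opt = np.bincount(draws.argmax(axis=1), minlength=d) / n_samples
        # Thresholding step: only arms with estimated play probability
        # at least `threshold` remain eligible this round.
        allowed = np.flatnonzero(p_opt >= threshold)
        # Standard Thompson step, restricted to the eligible arms.
        theta = rng.beta(alpha[allowed], beta[allowed])
        arm = allowed[theta.argmax()]
        # Observe a Bernoulli reward and update the posterior.
        reward = rng.binomial(1, means[arm])
        alpha[arm] += reward
        beta[arm] += 1 - reward
        total_loss += 1 - reward
    return total_loss

# Example run: 5 arms, the best arm has mean reward 0.9 (hence low loss).
print(thresholded_thompson_sampling(np.array([0.5, 0.6, 0.9, 0.4, 0.55]), horizon=2000))
```

The only departure from ordinary Thompson Sampling is the restriction to arms whose estimated probability of being optimal clears the threshold; everything else (posterior sampling and updates) is unchanged.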
