
Parallelizing Thompson Sampling

2021-06-02 · NeurIPS 2021

Amin Karbasi, Vahab Mirrokni, Mohammad Shadravan


Abstract

How can we make use of information parallelism in online decision-making problems while efficiently balancing the exploration-exploitation trade-off? In this paper, we introduce a batch Thompson Sampling framework for two canonical online decision-making problems, namely, the stochastic multi-armed bandit and the linear contextual bandit with finitely many arms. Over a time horizon T, our batch Thompson Sampling policy achieves the same (asymptotic) regret bound as a fully sequential one while carrying out only O(log T) batch queries. To achieve this exponential reduction, i.e., reducing the number of interactions from T to O(log T), our batch policy dynamically determines the duration of each batch in order to balance the exploration-exploitation trade-off. We also demonstrate experimentally that dynamic batch allocation dramatically outperforms natural baselines such as static batch allocations.
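The batch framework described above can be sketched in code. The following is a minimal illustration for a Bernoulli multi-armed bandit: the posterior is frozen for the duration of each batch, and the batch ends according to a dynamic rule. The specific stopping rule used here (end the batch once some arm's pull count has doubled since the batch began, a standard doubling heuristic that yields O(log T) batches) is an assumption for illustration, not necessarily the paper's exact policy.

```python
import numpy as np

def batched_thompson_sampling(true_means, T, seed=0):
    """Batched Thompson Sampling sketch for a Bernoulli bandit.

    Rewards observed inside a batch are only folded into the Beta
    posterior when the batch ends, so arm selection within a batch
    can run in parallel. The doubling-based batch-stopping rule is
    an illustrative assumption.
    """
    rng = np.random.default_rng(seed)
    k = len(true_means)
    successes = np.ones(k)   # Beta(1, 1) priors
    failures = np.ones(k)
    pulls = np.zeros(k, dtype=int)
    num_batches = 0
    t = 0
    while t < T:
        num_batches += 1
        batch_s = np.zeros(k)            # rewards buffered within the batch
        batch_f = np.zeros(k)
        start_pulls = pulls.copy()
        while t < T:
            # Sample from the posterior as frozen at the batch start.
            theta = rng.beta(successes, failures)
            arm = int(np.argmax(theta))
            reward = rng.random() < true_means[arm]
            batch_s[arm] += reward
            batch_f[arm] += 1 - reward
            pulls[arm] += 1
            t += 1
            # Assumed dynamic rule: stop once this arm's pull count
            # has doubled since the batch began.
            if pulls[arm] >= 2 * max(start_pulls[arm], 1):
                break
        successes += batch_s             # posterior update at batch end
        failures += batch_f
    return num_batches, pulls

num_batches, pulls = batched_thompson_sampling([0.9, 0.5, 0.4], T=10_000)
```

Because each batch doubles some arm's pull count, any single arm can trigger at most about log2(T) batches, so the total number of posterior updates grows logarithmically in T rather than linearly, mirroring the exponential reduction in interactions claimed in the abstract.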

Tasks

Reproductions