Efficient and Adaptive Posterior Sampling Algorithms for Bandits
Bingshan Hu, Zhiming Huang, Tianyue H. Zhang, Mathias Lécuyer, Nidhi Hegde
Abstract
We study Thompson Sampling-based algorithms for stochastic bandits with bounded rewards. As the existing problem-dependent regret bound for Thompson Sampling with Gaussian priors [Agrawal and Goyal, 2017] is vacuous when T ≤ 288 e^64, we derive a more practical bound that tightens the coefficient of the leading term to 1270. Additionally, motivated by large-scale real-world applications that require scalability, adaptive computational resource allocation, and a balance in utility and computation, we propose two parameterized Thompson Sampling-based algorithms: Thompson Sampling with Model Aggregation (TS-MA-α) and Thompson Sampling with Timestamp Duelling (TS-TD-α), where α ∈ [0, 1] controls the trade-off between utility and computation. Both algorithms achieve an O(K ln^(α+1)(T)/Δ) regret bound, where K is the number of arms, T is the finite learning horizon, and Δ denotes the single-round performance loss when pulling a sub-optimal arm.
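To make the setting concrete, the following is a minimal sketch of vanilla Thompson Sampling with Gaussian posteriors for bounded rewards (the baseline whose regret bound the paper tightens) — not the paper's TS-MA-α or TS-TD-α algorithms. The posterior N(empirical mean, 1/(n_i + 1)) and the Bernoulli arms are illustrative assumptions; the function name and parameters are hypothetical.

```python
import math
import random


def thompson_sampling_gaussian(means, horizon, seed=0):
    """Vanilla Thompson Sampling with Gaussian posteriors (illustrative sketch).

    means: true mean rewards in [0, 1]; arms are Bernoulli here for simplicity.
    Each round, a posterior sample theta_i ~ N(mu_hat_i, 1/(n_i + 1)) is drawn
    for every arm and the arm with the largest sample is pulled.
    Returns the pull counts and the cumulative pseudo-regret.
    """
    rng = random.Random(seed)
    k = len(means)
    pulls = [0] * k          # n_i: number of pulls of arm i
    sums = [0.0] * k         # running sum of observed rewards for arm i
    best = max(means)
    regret = 0.0
    for _ in range(horizon):
        # Posterior sample for each arm; unpulled arms use mean 0 and variance 1.
        samples = [
            rng.gauss(sums[i] / pulls[i] if pulls[i] else 0.0,
                      1.0 / math.sqrt(pulls[i] + 1))
            for i in range(k)
        ]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1.0 if rng.random() < means[arm] else 0.0
        pulls[arm] += 1
        sums[arm] += reward
        # Pseudo-regret: gap Delta between the best mean and the pulled arm's mean.
        regret += best - means[arm]
    return pulls, regret
```

Note that this baseline draws one posterior sample per arm per round; the computational cost of that per-round sampling is exactly what the parameter α in TS-MA-α and TS-TD-α trades off against utility.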