
Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
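The idea above can be sketched for the Beta-Bernoulli bandit, the standard textbook instance: each arm keeps a Beta posterior over its unknown success probability, one sample is drawn per arm each round, and the arm with the largest sampled value is played. The arm count and success probabilities below are illustrative, not from any paper on this page.

```python
import random

def thompson_step(successes, failures):
    """One round of Beta-Bernoulli Thompson sampling.

    Draw one sample from each arm's Beta(successes + 1, failures + 1)
    posterior and return the index of the arm whose sampled mean is largest.
    """
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

# Hypothetical 3-armed Bernoulli bandit with unknown success probabilities.
true_probs = [0.2, 0.5, 0.7]
successes = [0, 0, 0]
failures = [0, 0, 0]

random.seed(0)
for _ in range(2000):
    arm = thompson_step(successes, failures)
    # Bernoulli reward; update the chosen arm's posterior counts.
    if random.random() < true_probs[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1
```

Because sampling from the posterior is random, the agent keeps occasionally trying apparently inferior arms (exploration) while increasingly concentrating its pulls on the arm with the best posterior (exploitation); after enough rounds, most pulls go to the truly best arm.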

Papers

Showing 201–210 of 655 papers

Title | Status | Hype
Sequential Best-Arm Identification with Application to Brain-Computer Interface | — | 0
Thompson Sampling for Parameterized Markov Decision Processes with Uninformative Actions | — | 0
Trajectory-oriented optimization of stochastic epidemiological models | Code | 0
An improved regret analysis for UCB-N and TS-N | — | 0
Kullback-Leibler Maillard Sampling for Multi-armed Bandits with Bounded Rewards | Code | 0
Thompson Sampling Regret Bounds for Contextual Bandits with sub-Gaussian rewards | — | 0
Efficiently Tackling Million-Dimensional Multiobjective Problems: A Direction Sampling and Fine-Tuning Approach | — | 0
Sharp Deviations Bounds for Dirichlet Weighted Sums with Application to analysis of Bayesian algorithms | — | 0
GUTS: Generalized Uncertainty-Aware Thompson Sampling for Multi-Agent Active Search | — | 0
Adaptive Experimentation at Scale: A Computational Framework for Flexible Batches | — | 0
Page 21 of 66

No leaderboard results yet.