
Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
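The idea above can be sketched for the simplest setting, a Bernoulli bandit with Beta(1,1) priors: each round, draw one sample from every arm's posterior, play the arm whose sampled mean is highest, and update that arm's posterior with the observed reward. This is a minimal standard-library sketch; the arm probabilities, round count, and function name are illustrative, not from the page.

```python
import random

def thompson_sampling_bernoulli(true_probs, n_rounds=2000, seed=0):
    """Thompson sampling for a Bernoulli bandit with Beta(1, 1) priors.

    Returns the number of times each arm was pulled.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    alpha = [1] * n_arms  # Beta posterior: 1 + successes per arm
    beta = [1] * n_arms   # Beta posterior: 1 + failures per arm
    pulls = [0] * n_arms
    for _ in range(n_rounds):
        # Draw a random belief: one sample from each arm's posterior.
        samples = [rng.betavariate(alpha[a], beta[a]) for a in range(n_arms)]
        # Play the action that maximizes expected reward under that belief.
        arm = max(range(n_arms), key=lambda a: samples[a])
        reward = 1 if rng.random() < true_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls
```

Because each arm is chosen with (approximately) the posterior probability that it is optimal, exploration tapers off naturally as the posteriors concentrate: for example, with arms of payout probability 0.2 and 0.8, almost all of the 2000 pulls end up on the second arm.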

Papers

Showing 121-130 of 655 papers

Title | Status | Hype
Towards Efficient and Optimal Covariance-Adaptive Algorithms for Combinatorial Semi-Bandits | | 0
Optimizing Adaptive Experiments: A Unified Approach to Regret Minimization and Best-Arm Identification | | 0
Thompson Sampling in Partially Observable Contextual Bandits | | 0
Diffusion Models Meet Contextual Bandits with Large Action Spaces | | 0
Tree Ensembles for Contextual Bandits | | 0
Optimistic Thompson Sampling for No-Regret Learning in Unknown Games | | 0
Context in Public Health for Underserved Communities: A Bayesian Approach to Online Restless Bandits | | 0
Efficient Exploration for LLMs | | 0
Accelerating Approximate Thompson Sampling with Underdamped Langevin Monte Carlo | Code | 0
Thompson Sampling for Stochastic Bandits with Noisy Contexts: An Information-Theoretic Regret Analysis | | 0
