SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
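The "randomly drawn belief" step is easiest to see in the Beta-Bernoulli case: each arm keeps a Beta posterior over its unknown reward probability, one sample is drawn from every posterior each round, and the arm with the largest sample is pulled. The sketch below is a minimal illustration under those assumptions; the function name and arm probabilities are made up for the example and are not tied to any paper listed on this page.

```python
import random

def thompson_sampling(true_probs, n_rounds, seed=0):
    """Beta-Bernoulli Thompson sampling on a multi-armed bandit.

    Each arm i keeps a Beta(successes_i + 1, failures_i + 1) posterior
    over its reward probability (a uniform Beta(1, 1) prior). Every
    round, one belief is sampled per arm and the arm whose sampled
    value is largest is pulled.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    successes = [0] * n_arms
    failures = [0] * n_arms
    total_reward = 0
    for _ in range(n_rounds):
        # Draw one sample from each arm's posterior, then act
        # greedily with respect to the sampled beliefs.
        samples = [rng.betavariate(successes[i] + 1, failures[i] + 1)
                   for i in range(n_arms)]
        arm = max(range(n_arms), key=lambda i: samples[i])
        # Observe a Bernoulli reward from the chosen arm and update
        # that arm's posterior counts.
        reward = 1 if rng.random() < true_probs[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return successes, failures, total_reward

# Example: three arms with hidden success rates 0.2, 0.5, 0.8.
succ, fail, reward = thompson_sampling([0.2, 0.5, 0.8], n_rounds=2000)
pulls = [s + f for s, f in zip(succ, fail)]
```

Because an arm's posterior concentrates as it is pulled, the sampling step naturally shifts from exploration (wide posteriors, frequent lead changes among sampled values) to exploitation (the best arm's posterior dominates the draws), which is how the heuristic addresses the exploration-exploitation dilemma.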

Papers

Showing 141–150 of 655 papers

Influencing Bandits: Arm Selection for Preference Shaping
Towards Efficient and Optimal Covariance-Adaptive Algorithms for Combinatorial Semi-Bandits
Optimizing Adaptive Experiments: A Unified Approach to Regret Minimization and Best-Arm Identification
Thompson Sampling in Partially Observable Contextual Bandits
Diffusion Models Meet Contextual Bandits with Large Action Spaces
Tree Ensembles for Contextual Bandits
Context in Public Health for Underserved Communities: A Bayesian Approach to Online Restless Bandits
Optimistic Thompson Sampling for No-Regret Learning in Unknown Games
Efficient Exploration for LLMs
Accelerating Approximate Thompson Sampling with Underdamped Langevin Monte Carlo (code available)
