SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
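This action rule can be sketched for the Bernoulli multi-armed bandit: keep a Beta posterior per arm, draw one sample from each posterior, and pull the arm with the highest draw. The two-armed setup, the true win rates, and the function name below are illustrative assumptions, not anything from this page.

```python
import random

def thompson_sampling(successes, failures):
    """Pick the arm whose Beta-posterior sample is highest (Beta(1,1) priors)."""
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

# Simulate a two-armed Bernoulli bandit (true win rates are assumptions).
true_rates = [0.3, 0.7]
wins = [0, 0]
losses = [0, 0]
random.seed(0)
for _ in range(2000):
    arm = thompson_sampling(wins, losses)          # randomly drawn belief
    if random.random() < true_rates[arm]:          # observe reward
        wins[arm] += 1
    else:
        losses[arm] += 1
```

Because each arm is chosen with probability equal to the posterior probability that it is optimal, play concentrates on the better arm as evidence accumulates, while the worse arm still gets occasional exploratory pulls.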

Papers

Showing 141-150 of 655 papers

Title (no Status is listed for these entries, and all have a Hype score of 0):

- Better Optimism By Bayes: Adaptive Planning with Rich Models
- Blind Exploration and Exploitation of Stochastic Experts
- Bootstrapped Thompson Sampling and Deep Exploration
- BOTS: Batch Bayesian Optimization of Extended Thompson Sampling for Severely Episode-Limited RL Settings
- Calibrated Fairness in Bandits
- A Note on Information-Directed Sampling and Thompson Sampling
- An Unbiased Data Collection and Content Exploitation/Exploration Strategy for Personalization
- Causal Bandits without prior knowledge using separating sets
- Chained Information-Theoretic bounds and Tight Regret Rate for Linear Bandit Problems
- Bayesian Quantile and Expectile Optimisation
Page 15 of 66
