
Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
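The idea can be made concrete with the standard Beta-Bernoulli case: maintain a Beta posterior per arm, draw one sample from each posterior, and pull the arm whose sample is largest. The following sketch is illustrative only; the function names (`thompson_step`, `run_bandit`) are hypothetical and not taken from any paper listed on this page.

```python
import random

def thompson_step(successes, failures, rng=random):
    """Draw one Beta(s+1, f+1) sample per arm (uniform prior) and
    return the index of the arm with the largest sampled mean."""
    samples = [rng.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=lambda i: samples[i])

def run_bandit(true_probs, steps=2000, seed=0):
    """Simulate a Bernoulli bandit: after each pull, update the chosen
    arm's success/failure counts with the observed 0/1 reward."""
    rng = random.Random(seed)
    k = len(true_probs)
    successes, failures = [0] * k, [0] * k
    for _ in range(steps):
        arm = thompson_step(successes, failures, rng)
        if rng.random() < true_probs[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures
```

Because exploration comes from posterior randomness rather than an explicit bonus, arms with little data keep wide posteriors and are still sampled occasionally, while the empirically best arm accumulates the vast majority of pulls over time.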

Papers

Showing 321–330 of 655 papers

Title | Status | Hype
Batched Thompson Sampling for Multi-Armed Bandits | | 0
Metadata-based Multi-Task Bandits with Bayesian Hierarchical Models | | 0
Debiasing Samples from Online Learning Using Bootstrap | | 0
Adaptively Optimize Content Recommendation Using Multi Armed Bandit Algorithms in E-commerce | | 0
From Predictions to Decisions: The Importance of Joint Predictive Distributions | | 0
GuideBoot: Guided Bootstrap for Deep Contextual Bandits | | 0
No Regrets for Learning the Prior in Bandits | | 0
Metalearning Linear Bandits by Prior Update | | 0
Bayesian decision-making under misspecified priors with applications to meta-learning | | 0
Markov Decision Process modeled with Bandits for Sequential Decision Making in Linear-flow | | 0
Page 33 of 66

No leaderboard results yet.