SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
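The "randomly drawn belief" step can be sketched for the simplest setting, the Bernoulli bandit: keep a Beta posterior per arm, sample once from each posterior, pull the arm with the largest sample, and update that arm's posterior with the observed reward. This is a minimal illustration, assuming Beta(1, 1) priors and the hypothetical helper name `thompson_sampling`, not a reference implementation.

```python
import random

def thompson_sampling(true_probs, horizon=2000, seed=0):
    """Bernoulli Thompson sampling with Beta(1, 1) priors on each arm.

    true_probs: unknown-to-the-agent success probability of each arm.
    Returns the number of times each arm was pulled.
    """
    rng = random.Random(seed)
    k = len(true_probs)
    alpha = [1] * k  # successes + 1 (Beta shape parameters)
    beta = [1] * k   # failures + 1
    pulls = [0] * k
    for _ in range(horizon):
        # Draw one sample per arm from its Beta posterior: this is the
        # "randomly drawn belief" about each arm's mean reward.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        # Act greedily with respect to the sampled beliefs.
        arm = max(range(k), key=samples.__getitem__)
        reward = 1 if rng.random() < true_probs[arm] else 0
        # Conjugate posterior update for the pulled arm only.
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = thompson_sampling([0.2, 0.5, 0.8])
```

Because arms with uncertain posteriors produce high samples occasionally, the algorithm explores early on, then concentrates pulls on the arm with the highest estimated mean as its posterior tightens.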

Papers

Showing 331-340 of 655 papers

Title | Status | Hype
Scalable regret for learning to control network-coupled subsystems with unknown dynamics | | 0
Batched Thompson Sampling for Multi-Armed Bandits | | 0
Metadata-based Multi-Task Bandits with Bayesian Hierarchical Models | | 0
Debiasing Samples from Online Learning Using Bootstrap | | 0
Adaptively Optimize Content Recommendation Using Multi Armed Bandit Algorithms in E-commerce | | 0
From Predictions to Decisions: The Importance of Joint Predictive Distributions | | 0
GuideBoot: Guided Bootstrap for Deep Contextual Bandits | | 0
No Regrets for Learning the Prior in Bandits | | 0
Metalearning Linear Bandits by Prior Update | | 0
Bayesian decision-making under misspecified priors with applications to meta-learning | | 0
Page 34 of 66

No leaderboard results yet.