
Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
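The belief-sampling loop described above can be sketched for the simplest case, a Bernoulli multi-armed bandit with Beta posteriors. This is a minimal illustration, not code from any paper listed on this page; the arm probabilities and step count are made up:

```python
import random

def thompson_sampling(true_probs, steps=2000, seed=0):
    """Thompson sampling for a Bernoulli multi-armed bandit.

    Each arm keeps a Beta(successes + 1, failures + 1) posterior over its
    unknown reward probability. At every step we draw one sample per arm
    from its posterior (a "randomly drawn belief") and pull the arm whose
    sample is largest, i.e. act greedily with respect to that draw.
    """
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    k = len(true_probs)
    successes = [0] * k
    failures = [0] * k
    for _ in range(steps):
        # Sample a plausible reward probability for each arm.
        samples = [rng.betavariate(successes[i] + 1, failures[i] + 1)
                   for i in range(k)]
        # Exploit the sampled belief: pick the arm with the largest draw.
        arm = max(range(k), key=lambda i: samples[i])
        # Observe a Bernoulli reward and update that arm's posterior.
        if rng.random() < true_probs[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures

# Hypothetical arms: the sampler should concentrate pulls on the 0.8 arm.
succ, fail = thompson_sampling([0.2, 0.5, 0.8])
pulls_per_arm = [s + f for s, f in zip(succ, fail)]
```

Because arms with uncertain posteriors occasionally produce large samples, the method explores early on and gradually shifts its pulls toward the empirically best arm, which is how it balances exploration against exploitation.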

Papers

Showing 301–325 of 655 papers

Title | Status | Hype
Variational Bayesian Optimistic Sampling |  | 0
Differentially Private Federated Bayesian Optimization with Distributed Exploration |  | 0
Analysis of Thompson Sampling for Partially Observable Contextual Multi-Armed Bandits |  | 0
Diversified Sampling for Batched Bayesian Optimization with Determinantal Point Processes |  | 0
Show Me the Whole World: Towards Entire Item Space Exploration for Interactive Personalized Recommendations | Code | 0
EE-Net: Exploitation-Exploration Neural Networks in Contextual Bandits | Code | 1
Feel-Good Thompson Sampling for Contextual Bandits and Reinforcement Learning |  | 0
Batched Thompson Sampling |  | 0
Asymptotic Performance of Thompson Sampling in the Batched Multi-Armed Bandits |  | 0
Regularized-OFU: an efficient algorithm for general contextual bandit with optimization oracles |  | 0
Expected Improvement-based Contextual Bandits |  | 0
Apple Tasting Revisited: Bayesian Approaches to Partially Monitored Online Binary Classification |  | 0
Deep Exploration for Recommendation Systems |  | 0
Vaccine allocation policy optimization and budget sharing mechanism using Thompson sampling | Code | 0
Online Learning of Network Bottlenecks via Minimax Paths |  | 0
Machine Learning for Online Algorithm Selection under Censored Feedback | Code | 0
Thompson Sampling for Bandits with Clustered Arms |  | 0
A Unifying Theory of Thompson Sampling for Continuous Risk-Averse Bandits | Code | 0
A relaxed technical assumption for posterior sampling-based reinforcement learning for control of unknown linear systems |  | 0
Scalable regret for learning to control network-coupled subsystems with unknown dynamics |  | 0
Batched Thompson Sampling for Multi-Armed Bandits |  | 0
Metadata-based Multi-Task Bandits with Bayesian Hierarchical Models |  | 0
Debiasing Samples from Online Learning Using Bootstrap |  | 0
Adaptively Optimize Content Recommendation Using Multi Armed Bandit Algorithms in E-commerce |  | 0
From Predictions to Decisions: The Importance of Joint Predictive Distributions |  | 0
Page 13 of 27

No leaderboard results yet.