SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
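The idea above can be sketched concretely for the Bernoulli bandit case, where each arm's reward probability gets a Beta posterior: sample once from every posterior, play the arm whose sample is largest, then update that arm's counts. This is a minimal illustrative sketch, not code from any paper listed on this page; all names are hypothetical.

```python
import random

def thompson_sampling(true_probs, n_rounds, rng):
    """Beta-Bernoulli Thompson sampling; returns pull counts per arm."""
    n_arms = len(true_probs)
    successes = [0] * n_arms  # observed rewards of 1 per arm
    failures = [0] * n_arms   # observed rewards of 0 per arm
    pulls = [0] * n_arms
    for _ in range(n_rounds):
        # Draw one sample from each arm's Beta(successes+1, failures+1)
        # posterior (uniform Beta(1, 1) prior), then play the argmax.
        samples = [rng.betavariate(s + 1, f + 1)
                   for s, f in zip(successes, failures)]
        arm = samples.index(max(samples))
        # Simulated Bernoulli reward from the chosen arm.
        reward = 1 if rng.random() < true_probs[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

rng = random.Random(0)
pulls = thompson_sampling([0.2, 0.5, 0.8], 2000, rng)
```

Because the action is the argmax over posterior *samples* rather than posterior means, arms with uncertain estimates are still tried occasionally, while play concentrates on the empirically best arm as its posterior sharpens.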

Papers

Showing 161–170 of 655 papers

- Bandit Convex Optimization: √T Regret in One Dimension (Hype: 0)
- Bandit Change-Point Detection for Real-Time Monitoring High-Dimensional Data Under Sampling Control (Hype: 0)
- Analysis of Thompson Sampling for Combinatorial Multi-armed Bandit with Probabilistically Triggered Arms (Hype: 0)
- Adaptive Experimentation at Scale: A Computational Framework for Flexible Batches (Hype: 0)
- BanditCAT and AutoIRT: Machine Learning Approaches to Computerized Adaptive Testing and Item Calibration (Hype: 0)
- Bag of Policies for Distributional Deep Exploration (Hype: 0)
- Analysis and Design of Thompson Sampling for Stochastic Partial Monitoring (Hype: 0)
- AutoSeM: Automatic Task Selection and Mixing in Multi-Task Learning (Hype: 0)
- Automatic Ensemble Learning for Online Influence Maximization (Hype: 0)
- An Adversarial Analysis of Thompson Sampling for Full-information Online Learning: from Finite to Infinite Action Spaces (Hype: 0)
Page 17 of 66

No leaderboard results yet.