SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
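The idea above — sample a belief from each arm's posterior and act greedily on that sample — can be sketched for the Beta-Bernoulli case using only the standard library. This is a minimal illustration, not code from any of the papers listed below; the function name and parameters are chosen for the example.

```python
import random

def thompson_sampling(true_probs, rounds, seed=0):
    """Beta-Bernoulli Thompson sampling on a Bernoulli bandit.

    Each arm keeps a Beta(successes + 1, failures + 1) posterior.
    Every round we draw one sample per posterior and pull the arm
    whose sample is largest, i.e. the arm that maximizes expected
    reward under a randomly drawn belief.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    successes = [0] * n_arms
    failures = [0] * n_arms
    pulls = [0] * n_arms
    for _ in range(rounds):
        # One posterior sample per arm: this is the "randomly drawn belief".
        samples = [rng.betavariate(successes[a] + 1, failures[a] + 1)
                   for a in range(n_arms)]
        arm = max(range(n_arms), key=lambda a: samples[a])
        # Observe a Bernoulli reward and update that arm's posterior.
        reward = 1 if rng.random() < true_probs[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = thompson_sampling([0.3, 0.5, 0.7], rounds=2000)
```

Because early posteriors are wide, every arm gets explored; as evidence accumulates, the posterior samples concentrate and pulls shift toward the best arm, which is how the exploration-exploitation tradeoff is resolved without an explicit exploration schedule.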

Papers

Showing 151–175 of 655 papers

| Title | Status | Hype |
| --- | --- | --- |
| Bandits Under The Influence (Extended Version) | | 0 |
| Analysis of Thompson Sampling for Partially Observable Contextual Multi-Armed Bandits | | 0 |
| Bandit Policies for Reliable Cellular Network Handovers in Extreme Mobility | | 0 |
| Bandit Models of Human Behavior: Reward Processing in Mental Disorders | | 0 |
| Analysis of Thompson Sampling for Graphical Bandits Without the Graphs | | 0 |
| Adaptive Exploration-Exploitation Tradeoff for Opportunistic Bandits | | 0 |
| A Closer Look at the Worst-case Behavior of Multi-armed Bandit Algorithms | | 0 |
| Context in Public Health for Underserved Communities: A Bayesian Approach to Online Restless Bandits | | 0 |
| Bandit Learning for Diversified Interactive Recommendation | | 0 |
| Adaptive Rate of Convergence of Thompson Sampling for Gaussian Process Optimization | | 0 |
| Bandit Convex Optimization: √T Regret in One Dimension | | 0 |
| Bandit Change-Point Detection for Real-Time Monitoring High-Dimensional Data Under Sampling Control | | 0 |
| Analysis of Thompson Sampling for Combinatorial Multi-armed Bandit with Probabilistically Triggered Arms | | 0 |
| Adaptive Experimentation at Scale: A Computational Framework for Flexible Batches | | 0 |
| BanditCAT and AutoIRT: Machine Learning Approaches to Computerized Adaptive Testing and Item Calibration | | 0 |
| Bag of Policies for Distributional Deep Exploration | | 0 |
| Analysis and Design of Thompson Sampling for Stochastic Partial Monitoring | | 0 |
| AutoSeM: Automatic Task Selection and Mixing in Multi-Task Learning | | 0 |
| Automatic Ensemble Learning for Online Influence Maximization | | 0 |
| An Adversarial Analysis of Thompson Sampling for Full-information Online Learning: from Finite to Infinite Action Spaces | | 0 |
| Adaptive Data Augmentation for Thompson Sampling | | 0 |
| Achieving adaptivity and optimality for multi-armed bandits using Exponential Kullback-Leibler Maillard Sampling | | 0 |
| A Multi-Armed Bandit to Smartly Select a Training Set from Big Medical Data | | 0 |
| A Unified and Efficient Coordinating Framework for Autonomous DBMS Tuning | | 0 |
| Augmented RBMLE-UCB Approach for Adaptive Control of Linear Quadratic Systems | | 0 |
Page 7 of 27

No leaderboard results yet.