SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
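The idea above can be sketched for the Bernoulli bandit case: each arm keeps a Beta posterior over its success probability, and on each round we sample one value per arm from those posteriors and pull the arm with the largest draw. This is a minimal illustrative sketch, not code from any listed paper; the arm means and round count are made-up assumptions.

```python
import random

def thompson_step(successes, failures):
    """Pick an arm for one round of Thompson sampling.

    Each arm's reward probability has a Beta(successes+1, failures+1)
    posterior (uniform prior). We draw one sample per arm and play
    the arm whose sampled belief is highest.
    """
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

def run_bandit(true_means, rounds=5000, seed=0):
    """Simulate Thompson sampling on Bernoulli arms (means are assumed)."""
    random.seed(seed)
    k = len(true_means)
    successes, failures = [0] * k, [0] * k
    for _ in range(rounds):
        arm = thompson_step(successes, failures)
        if random.random() < true_means[arm]:
            successes[arm] += 1  # observed reward 1: update posterior
        else:
            failures[arm] += 1   # observed reward 0: update posterior
    return successes, failures

# With arms of mean 0.3, 0.5, 0.7, play should concentrate on arm 2.
successes, failures = run_bandit([0.3, 0.5, 0.7])
pulls = [s + f for s, f in zip(successes, failures)]
```

Because exploration comes from posterior sampling rather than an explicit schedule, arms that look bad are still tried occasionally until their posteriors rule them out.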

Papers

Showing 31–40 of 655 papers

Title (each paper below currently has a Hype score of 0)

A Closer Look at the Worst-case Behavior of Multi-armed Bandit Algorithms
Adaptive Exploration-Exploitation Tradeoff for Opportunistic Bandits
Context in Public Health for Underserved Communities: A Bayesian Approach to Online Restless Bandits
Aging Bandits: Regret Analysis and Order-Optimal Learning Algorithm for Wireless Networks with Stochastic Arrivals
Adaptive Experimentation at Scale: A Computational Framework for Flexible Batches
Adaptive Data Augmentation for Thompson Sampling
Achieving adaptivity and optimality for multi-armed bandits using Exponential-Kullback Leibler Maillard Sampling
Adaptive Combinatorial Allocation
A Change-Detection Based Thompson Sampling Framework for Non-Stationary Bandits
A Batched Multi-Armed Bandit Approach to News Headline Testing
Page 4 of 66

No leaderboard results yet.