SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. At each step it draws a belief at random from the posterior distribution over models and chooses the action that maximizes expected reward under that sampled belief.
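As a concrete illustration, here is a minimal sketch of Thompson sampling for a Bernoulli bandit with a Beta(1, 1) prior on each arm; the arm probabilities, round count, and function name are made up for the example:

```python
import random

def thompson_sampling(true_probs, n_rounds=5000, seed=0):
    """Beta-Bernoulli Thompson sampling on a K-armed bandit (illustrative sketch)."""
    rng = random.Random(seed)
    k = len(true_probs)
    # Beta(1, 1) uniform prior on each arm's success probability.
    successes = [1] * k
    failures = [1] * k
    total_reward = 0
    for _ in range(n_rounds):
        # Draw one sample from each arm's posterior belief...
        samples = [rng.betavariate(successes[i], failures[i]) for i in range(k)]
        # ...and pull the arm whose sampled mean reward is largest.
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        total_reward += reward
        # Conjugate update of the chosen arm's Beta posterior.
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures, total_reward
```

Because arms are selected by sampling from the posterior rather than by its mean, uncertain arms keep getting explored early on, while play concentrates on the best arm as its posterior sharpens.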

Papers

Showing 326–350 of 655 papers

Title | Status | Hype
GuideBoot: Guided Bootstrap for Deep Contextual Bandits | — | 0
No Regrets for Learning the Prior in Bandits | — | 0
Metalearning Linear Bandits by Prior Update | — | 0
Bayesian decision-making under misspecified priors with applications to meta-learning | — | 0
Markov Decision Process modeled with Bandits for Sequential Decision Making in Linear-flow | — | 0
Random Effect Bandits | — | 0
Thompson Sampling for Unimodal Bandits | — | 0
Thompson Sampling with a Mixture Prior | — | 0
Multi-armed Bandit Algorithms on System-on-Chip: Go Frequentist or Bayesian? | — | 0
A Closer Look at the Worst-case Behavior of Multi-armed Bandit Algorithms | — | 0
Parallelizing Thompson Sampling | — | 0
Kolmogorov-Smirnov Test-Based Actively-Adaptive Thompson Sampling for Non-Stationary Bandits | — | 0
Asymptotically Optimal Bandits under Weighted Information | — | 0
Diffusion Approximations for Thompson Sampling | — | 0
Thompson Sampling for Gaussian Entropic Risk Bandits | — | 0
Deep Bandits Show-Off: Simple and Efficient Exploration with Deep Networks | Code | 1
Dynamic Slate Recommendation with Gated Recurrent Units and Thompson Sampling | Code | 1
High-dimensional near-optimal experiment design for drug discovery via Bayesian sparse sampling | — | 0
When and Whom to Collaborate with in a Changing Environment: A Collaborative Dynamic Bandit Solution | — | 0
Blind Exploration and Exploitation of Stochastic Experts | — | 0
Challenges in Statistical Analysis of Data Collected by a Bandit Algorithm: An Empirical Exploration in Applications to Adaptively Randomized Experiments | — | 0
Constrained Contextual Bandit Learning for Adaptive Radar Waveform Selection | — | 0
Efficient Optimal Selection for Composited Advertising Creatives with Tree Structure | Code | 0
Automated Creative Optimization for E-Commerce Advertising | Code | 0
Online Multi-Armed Bandits with Adaptive Inference | — | 0
Page 14 of 27

No leaderboard results yet.