SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
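As a minimal illustration of the idea (a sketch, not the method of any paper listed below), a Beta-Bernoulli Thompson sampler keeps a Beta posterior per arm, draws one sample of the success probability from each posterior, and plays the arm whose draw is largest:

```python
import random

def thompson_sampling(true_probs, n_rounds, seed=0):
    """Beta-Bernoulli Thompson sampling on a simulated bandit.

    `true_probs` holds each arm's success probability (unknown to
    the agent; used here only to simulate rewards). Returns per-arm
    (successes, failures) counts, including the Beta(1, 1) prior.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    # Beta(1, 1) uniform prior on each arm's success probability.
    successes = [1] * n_arms
    failures = [1] * n_arms
    for _ in range(n_rounds):
        # Draw one belief sample per arm, then act greedily on the draws:
        # this is the "randomly drawn belief" of the definition above.
        samples = [rng.betavariate(successes[a], failures[a])
                   for a in range(n_arms)]
        arm = max(range(n_arms), key=lambda a: samples[a])
        # Simulate a Bernoulli reward and update that arm's posterior.
        if rng.random() < true_probs[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures

# Hypothetical 3-armed bandit: the sampler should concentrate its
# pulls on the best arm (index 2) as its posterior sharpens.
s, f = thompson_sampling([0.2, 0.5, 0.8], n_rounds=2000)
most_pulled = max(range(3), key=lambda a: s[a] + f[a])
```

Exploration happens automatically: an under-sampled arm has a wide posterior, so its draws occasionally exceed the incumbent's and it gets pulled again.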

Papers

Showing 131-140 of 655 papers

Title | Status | Hype
Model-Free Approximate Bayesian Learning for Large-Scale Conversion Funnel Optimization | - | 0
Decentralized Multi-Agent Active Search and Tracking when Targets Outnumber Agents | - | 0
Improving sample efficiency of high dimensional Bayesian optimization with MCMC | - | 0
Adaptive Anytime Multi-Agent Path Finding Using Bandit-Based Large Neighborhood Search | Code | 1
Zero-Inflated Bandits | - | 0
Finite-Time Frequentist Regret Bounds of Multi-Agent Thompson Sampling on Sparse Hypergraphs | Code | 0
Best Arm Identification in Batched Multi-armed Bandit Problems | - | 0
Bayesian Analysis of Combinatorial Gaussian Process Bandits | - | 0
RoME: A Robust Mixed-Effects Bandit Algorithm for Optimizing Mobile Health Interventions | Code | 0
Sample-based Dynamic Hierarchical Transformer with Layer and Head Flexibility via Contextual Bandit | - | 0
Page 14 of 66

No leaderboard results yet.