SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
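The definition above is easiest to see in the Bernoulli multi-armed bandit: with a Beta(1, 1) prior on each arm's success probability, the posterior stays a Beta distribution, so "drawing a random belief" means sampling one value per arm from its posterior and playing the arm whose sample is highest. A minimal sketch (the arm probabilities, round count, and function name are illustrative, not from any paper listed here):

```python
import random

def thompson_sampling(true_probs, n_rounds=10000, seed=0):
    """Bernoulli Thompson sampling with independent Beta(1, 1) priors per arm."""
    rng = random.Random(seed)
    n_arms = len(true_probs)
    alpha = [1] * n_arms  # posterior Beta alpha = 1 + observed successes
    beta = [1] * n_arms   # posterior Beta beta  = 1 + observed failures
    total_reward = 0
    for _ in range(n_rounds):
        # Draw one belief about each arm's mean from its posterior,
        # then act greedily with respect to that random belief.
        samples = [rng.betavariate(alpha[a], beta[a]) for a in range(n_arms)]
        arm = max(range(n_arms), key=lambda a: samples[a])
        # Observe a Bernoulli reward and update that arm's posterior.
        reward = 1 if rng.random() < true_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        total_reward += reward
    return total_reward, alpha, beta

total, alpha, beta = thompson_sampling([0.2, 0.5, 0.8])
```

Because arms with uncertain posteriors occasionally produce high samples, the algorithm keeps exploring, but as evidence accumulates the posteriors concentrate and play shifts to the best arm.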

Papers

Showing 151-160 of 655 papers

Title | Status | Hype
Thompson Sampling for Stochastic Bandits with Noisy Contexts: An Information-Theoretic Regret Analysis | - | 0
Model-Free Approximate Bayesian Learning for Large-Scale Conversion Funnel Optimization | - | 0
Decentralized Multi-Agent Active Search and Tracking when Targets Outnumber Agents | - | 0
Improving sample efficiency of high dimensional Bayesian optimization with MCMC | - | 0
Zero-Inflated Bandits | - | 0
Finite-Time Frequentist Regret Bounds of Multi-Agent Thompson Sampling on Sparse Hypergraphs | Code | 0
Best Arm Identification in Batched Multi-armed Bandit Problems | - | 0
Bayesian Analysis of Combinatorial Gaussian Process Bandits | - | 0
RoME: A Robust Mixed-Effects Bandit Algorithm for Optimizing Mobile Health Interventions | Code | 0
Sample-based Dynamic Hierarchical Transformer with Layer and Head Flexibility via Contextual Bandit | - | 0
