SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. On each round, it draws a sample from the posterior belief over the reward model and chooses the action that maximizes expected reward under that sampled belief.
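As an illustration of the idea, here is a minimal sketch of Thompson sampling for a Bernoulli bandit, using Beta(1, 1) priors so each arm's posterior stays a Beta distribution. The function name `thompson_sampling` and the `pull` callback are hypothetical names chosen for this sketch, not part of any paper listed below.

```python
import random

def thompson_sampling(pull, n_arms, n_rounds, seed=0):
    """Bernoulli Thompson sampling with Beta(1, 1) priors (illustrative sketch).

    `pull(arm)` is assumed to return a 0/1 reward for the chosen arm.
    Each round we sample a mean-reward belief per arm from its Beta
    posterior and act greedily with respect to that random belief.
    """
    rng = random.Random(seed)
    successes = [0] * n_arms  # rewards of 1 observed per arm
    failures = [0] * n_arms   # rewards of 0 observed per arm
    total_reward = 0
    for _ in range(n_rounds):
        # Randomly drawn belief: one posterior sample per arm.
        samples = [rng.betavariate(successes[a] + 1, failures[a] + 1)
                   for a in range(n_arms)]
        # Choose the action that maximizes reward under that belief.
        arm = max(range(n_arms), key=lambda a: samples[a])
        reward = pull(arm)
        total_reward += reward
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return total_reward, successes, failures
```

Run against a simulated bandit with true means such as [0.2, 0.5, 0.8], the pull counts concentrate on the best arm as its posterior sharpens, while the other arms still get occasional exploratory pulls.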

Papers

Showing 371–380 of 655 papers

Title | Status | Hype
Posterior Sampling-Based Bayesian Optimization with Tighter Bayesian Regret Bounds | | 0
Posterior sampling for reinforcement learning: worst-case regret bounds | | 0
Posterior Sampling via Autoregressive Generation | | 0
Practical Adversarial Attacks on Stochastic Bandits via Fake Data Injection | | 0
Preferential Multi-Objective Bayesian Optimization | | 0
Prior-free and prior-dependent regret bounds for Thompson Sampling | | 0
Probabilistic Inference in Reinforcement Learning Done Right | | 0
Profitable Bandits | | 0
QoS-Aware Multi-Armed Bandits | | 0
Racing Thompson: an Efficient Algorithm for Thompson Sampling with Non-conjugate Priors | | 0

No leaderboard results yet.