SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
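The idea above can be sketched for a Bernoulli bandit: maintain a Beta posterior over each arm's reward probability, sample one belief per arm each round, and play the arm whose sampled mean is highest. The arm probabilities and the Beta(1, 1) priors below are illustrative assumptions, not taken from this page.

```python
import random

def thompson_sampling(true_probs, n_rounds=2000, seed=0):
    """Run Thompson sampling on a Bernoulli bandit; return pull counts per arm."""
    rng = random.Random(seed)
    n_arms = len(true_probs)
    successes = [1] * n_arms  # Beta prior alpha = 1 for every arm (assumed)
    failures = [1] * n_arms   # Beta prior beta = 1 for every arm (assumed)
    pulls = [0] * n_arms
    for _ in range(n_rounds):
        # Draw one random belief about each arm from its current posterior...
        samples = [rng.betavariate(successes[a], failures[a]) for a in range(n_arms)]
        # ...and act greedily with respect to that randomly drawn belief.
        arm = max(range(n_arms), key=lambda a: samples[a])
        reward = 1 if rng.random() < true_probs[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = thompson_sampling([0.2, 0.5, 0.8])
```

Because posteriors concentrate as evidence accumulates, the sampler explores all arms early on but increasingly pulls the best arm, which is how the random draw balances exploration against exploitation.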

Papers

Showing 501–510 of 655 papers (page 51 of 66)

| Title | Status | Hype |
| --- | --- | --- |
| Policy Gradient Optimization of Thompson Sampling Policies | | 0 |
| Position-Based Multiple-Play Bandits with Thompson Sampling | | 0 |
| Posterior Sampling-Based Bayesian Optimization with Tighter Bayesian Regret Bounds | | 0 |
| Posterior sampling for reinforcement learning: worst-case regret bounds | | 0 |
| Posterior Sampling via Autoregressive Generation | | 0 |
| Practical Adversarial Attacks on Stochastic Bandits via Fake Data Injection | | 0 |
| Preferential Multi-Objective Bayesian Optimization | | 0 |
| Prior-free and prior-dependent regret bounds for Thompson Sampling | | 0 |
| Probabilistic Inference in Reinforcement Learning Done Right | | 0 |
| Profitable Bandits | | 0 |

No leaderboard results yet.