
Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
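
A common concrete instantiation is the Bernoulli bandit with an independent Beta posterior on each arm: at every round, one mean is sampled from each arm's posterior, the arm with the largest sample is played, and the observed reward updates that arm's posterior counts. The minimal Python sketch below illustrates this; the success probabilities in `true_probs` are purely illustrative and it is not taken from any of the listed papers.

```python
import random

def thompson_sampling(true_probs, num_rounds=10_000, seed=0):
    """Bernoulli Thompson sampling with a Beta(1, 1) prior on each arm."""
    rng = random.Random(seed)
    k = len(true_probs)
    successes = [1] * k  # Beta alpha parameters (prior pseudo-counts of successes)
    failures = [1] * k   # Beta beta parameters (prior pseudo-counts of failures)

    total_reward = 0
    for _ in range(num_rounds):
        # Draw one sample from each arm's Beta posterior: a "randomly drawn belief".
        samples = [rng.betavariate(successes[i], failures[i]) for i in range(k)]
        # Act greedily with respect to the sampled beliefs.
        arm = max(range(k), key=lambda i: samples[i])
        # Observe a Bernoulli reward and update that arm's posterior.
        reward = 1 if rng.random() < true_probs[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return total_reward

# Example with three arms of unknown (to the algorithm) success probabilities.
print(thompson_sampling([0.3, 0.5, 0.7]))
```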

Papers

Showing papers 501-525 of 655 (page 21 of 27)

Policy Gradient Optimization of Thompson Sampling Policies
Position-Based Multiple-Play Bandits with Thompson Sampling
Posterior Sampling-Based Bayesian Optimization with Tighter Bayesian Regret Bounds
Posterior sampling for reinforcement learning: worst-case regret bounds
Posterior Sampling via Autoregressive Generation
Practical Adversarial Attacks on Stochastic Bandits via Fake Data Injection
Preferential Multi-Objective Bayesian Optimization
Prior-free and prior-dependent regret bounds for Thompson Sampling
Probabilistic Inference in Reinforcement Learning Done Right
Profitable Bandits
QoS-Aware Multi-Armed Bandits
Racing Thompson: an Efficient Algorithm for Thompson Sampling with Non-conjugate Priors
Random Effect Bandits
Random Hypervolume Scalarizations for Provable Multi-Objective Black Box Optimization
Randomised Bayesian Least-Squares Policy Iteration
Randomized Exploration in Cooperative Multi-Agent Reinforcement Learning
Regenerative Particle Thompson Sampling
Regret Analysis of Bandit Problems with Causal Background Knowledge
Regret Analysis of the Finite-Horizon Gittins Index Strategy for Multi-Armed Bandits
Regret Bounds for Information-Directed Reinforcement Learning
Regularized-OFU: an efficient algorithm for general contextual bandit with optimization oracles
Reinforcement Learning for Efficient and Tuning-Free Link Adaptation
Reinforcement learning techniques for Outer Loop Link Adaptation in 4G/5G systems
Reinforcement Learning with Subspaces using Free Energy Paradigm
Reinforcement Learning with Trajectory Feedback
