
Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
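The idea above can be sketched for the simplest case, a Bernoulli bandit with Beta priors. This is a minimal illustrative implementation, not taken from any of the papers listed below; the function names and the uniform Beta(1, 1) prior are assumptions made for the example.

```python
import random

def thompson_sampling_step(successes, failures):
    # Draw one sample from each arm's Beta posterior (Beta(1,1) prior,
    # so parameters are successes+1 and failures+1), then play the arm
    # whose sampled success probability is highest.
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=lambda i: samples[i])

def run_bandit(true_probs, n_rounds, seed=0):
    # Simulate a Bernoulli bandit: each round, select an arm by Thompson
    # sampling, observe a 0/1 reward, and update that arm's counts.
    random.seed(seed)
    k = len(true_probs)
    successes, failures = [0] * k, [0] * k
    for _ in range(n_rounds):
        arm = thompson_sampling_step(successes, failures)
        if random.random() < true_probs[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures
```

Because each action is chosen by maximizing against a posterior *sample* rather than the posterior mean, arms with few observations retain wide posteriors and are still played occasionally, which is how the sketch balances exploration and exploitation.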

Papers

Showing 376–400 of 655 papers

Prior-free and prior-dependent regret bounds for Thompson Sampling
Probabilistic Inference in Reinforcement Learning Done Right
Profitable Bandits
QoS-Aware Multi-Armed Bandits
Racing Thompson: an Efficient Algorithm for Thompson Sampling with Non-conjugate Priors
Random Effect Bandits
Random Hypervolume Scalarizations for Provable Multi-Objective Black Box Optimization
Randomised Bayesian Least-Squares Policy Iteration
Randomized Exploration in Cooperative Multi-Agent Reinforcement Learning
Regenerative Particle Thompson Sampling
Regret Analysis of Bandit Problems with Causal Background Knowledge
Regret Analysis of the Finite-Horizon Gittins Index Strategy for Multi-Armed Bandits
Regret Bounds for Information-Directed Reinforcement Learning
Regularized-OFU: an efficient algorithm for general contextual bandit with optimization oracles
Reinforcement Learning for Efficient and Tuning-Free Link Adaptation
Reinforcement learning techniques for Outer Loop Link Adaptation in 4G/5G systems
Reinforcement Learning with Subspaces using Free Energy Paradigm
Reinforcement Learning with Trajectory Feedback
Remote Contextual Bandits
Residual Bootstrap Exploration for Bandit Algorithms
Revised Progressive-Hedging-Algorithm Based Two-layer Solution Scheme for Bayesian Reinforcement Learning
Reward Biased Maximum Likelihood Estimation for Reinforcement Learning
Risk and optimal policies in bandit experiments
Risk-averse Contextual Multi-armed Bandit Problem with Linear Payoffs
Risk-Constrained Thompson Sampling for CVaR Bandits
Page 16 of 27

No leaderboard results yet.