
Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
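The idea above can be sketched concretely for the Bernoulli bandit case, where each arm's belief is a Beta posterior: every round, draw one sample from each arm's posterior and pull the arm whose sample is largest. This is an illustrative sketch, not code from any of the papers listed below; the function name, arm probabilities, and round count are made up for the example.

```python
import random

def thompson_sampling(true_probs, n_rounds=5000, seed=0):
    """Beta-Bernoulli Thompson sampling on a Bernoulli bandit (illustrative sketch).

    Each arm keeps a Beta(successes + 1, failures + 1) posterior (uniform prior).
    Per round: sample one value from every posterior, pull the argmax arm,
    observe a Bernoulli reward, and update that arm's counts.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    successes = [0] * n_arms
    failures = [0] * n_arms
    total_reward = 0
    for _ in range(n_rounds):
        # Randomly draw a belief: one posterior sample per arm.
        samples = [rng.betavariate(successes[a] + 1, failures[a] + 1)
                   for a in range(n_arms)]
        # Act greedily with respect to that sampled belief.
        arm = max(range(n_arms), key=samples.__getitem__)
        reward = 1 if rng.random() < true_probs[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return successes, failures, total_reward
```

Because exploration comes from posterior randomness rather than an explicit schedule, arms with wide posteriors keep getting occasional pulls while the empirically best arm is pulled increasingly often.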

Papers

Showing 391–400 of 655 papers

Title | Status | Hype
Sub-sampling for Efficient Non-Parametric Bandit Exploration | Code | 0
Improved Worst-Case Regret Bounds for Randomized Least-Squares Value Iteration | | 0
Bayesian Algorithms for Decentralized Stochastic Bandits | Code | 0
Reinforcement Learning for Efficient and Tuning-Free Link Adaptation | | 0
Double-Linear Thompson Sampling for Context-Attentive Bandits | | 0
Asynchronous ε-Greedy Bayesian Optimisation | Code | 0
Online Learning and Distributed Control for Residential Demand Response | | 0
Effects of Model Misspecification on Bayesian Bandits: Case Studies in UX Optimization | | 0
Stage-wise Conservative Linear Bandits | | 0
Neural Model-based Optimization with Right-Censored Observations | | 0
Page 40 of 66

No leaderboard results yet.