
Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
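The idea of "a randomly drawn belief" can be made concrete with the classic Beta-Bernoulli case: maintain a Beta posterior per arm, sample once from each posterior each round, and play the arm whose sample is largest. The sketch below is illustrative only; the function name, arm probabilities, and round count are our own choices, not from any particular paper.

```python
import random

def thompson_sampling_bernoulli(true_probs, n_rounds=2000, seed=0):
    """Beta-Bernoulli Thompson sampling sketch (illustrative example).

    Keeps a Beta(alpha, beta) posterior over each arm's success
    probability; each round, draws one sample per arm and plays the
    arm whose sampled value is highest, then updates that posterior.
    Returns the number of times each arm was pulled.
    """
    rng = random.Random(seed)
    k = len(true_probs)
    alpha = [1.0] * k  # posterior "successes + 1" (uniform prior)
    beta = [1.0] * k   # posterior "failures + 1" (uniform prior)
    pulls = [0] * k
    for _ in range(n_rounds):
        # Randomly drawn belief: one posterior sample per arm.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        # Observe a Bernoulli reward and update that arm's posterior.
        reward = 1 if rng.random() < true_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = thompson_sampling_bernoulli([0.3, 0.5, 0.7])
```

Because arms with uncertain posteriors occasionally produce high samples, the algorithm keeps exploring them, while posterior mass concentrating on the best arm makes it the usual choice over time.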

Papers

Showing 311–320 of 655 papers

| Title | Status | Hype |
|---|---|---|
| Improving Reward-Conditioned Policies for Multi-Armed Bandits using Normalized Weight Functions | | 0 |
| Influencing Bandits: Arm Selection for Preference Shaping | | 0 |
| Chained Information-Theoretic bounds and Tight Regret Rate for Linear Bandit Problems | | 0 |
| Information Directed Sampling and Bandits with Heteroscedastic Noise | | 0 |
| Information Directed Sampling for Stochastic Bandits with Graph Feedback | | 0 |
| Information-Theoretic Confidence Bounds for Reinforcement Learning | | 0 |
| IntelligentPooling: Practical Thompson Sampling for mHealth | | 0 |
| Joint User Association and Pairing in Multi-UAV-Assisted NOMA Networks: A Decaying-Epsilon Thompson Sampling Framework | | 0 |
| Apple Tasting Revisited: Bayesian Approaches to Partially Monitored Online Binary Classification | | 0 |
| Improved Worst-Case Regret Bounds for Randomized Least-Squares Value Iteration | | 0 |
