Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
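The idea above can be sketched for the simplest case, a Bernoulli bandit with Beta priors: sample one plausible success probability per arm from the current posterior belief, play the arm whose sample is highest, then update that arm's posterior with the observed reward. This is a minimal illustrative sketch, not from this page; the function name, arm probabilities, and round count are all made up for the example.

```python
import random

def thompson_sampling(true_probs, n_rounds=2000, seed=0):
    """Beta-Bernoulli Thompson sampling on a toy multi-armed bandit.

    true_probs: hidden success probability of each arm (for simulation only).
    Returns the Beta posterior parameters and the total reward collected.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    # Beta(1, 1) uniform prior over each arm's unknown success probability.
    alpha = [1] * n_arms
    beta = [1] * n_arms
    total_reward = 0
    for _ in range(n_rounds):
        # Draw one belief sample per arm, then act greedily on that draw:
        # this is the "randomly drawn belief" the description refers to.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(n_arms)]
        arm = max(range(n_arms), key=lambda i: samples[i])
        # Simulate a Bernoulli reward from the chosen arm.
        reward = 1 if rng.random() < true_probs[arm] else 0
        # Conjugate posterior update: Beta prior + Bernoulli likelihood.
        alpha[arm] += reward
        beta[arm] += 1 - reward
        total_reward += reward
    return alpha, beta, total_reward
```

Because arms with uncertain posteriors occasionally produce high samples, the algorithm keeps exploring them, while clearly inferior arms are sampled less and less often over time.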

Papers

Showing 311–320 of 655 papers

- An improved regret analysis for UCB-N and TS-N (Hype: 0)
- Influencing Bandits: Arm Selection for Preference Shaping (Hype: 0)
- Combinatorial Neural Bandits (Hype: 0)
- Information Directed Sampling and Bandits with Heteroscedastic Noise (Hype: 0)
- Information Directed Sampling for Stochastic Bandits with Graph Feedback (Hype: 0)
- Information-Theoretic Confidence Bounds for Reinforcement Learning (Hype: 0)
- IntelligentPooling: Practical Thompson Sampling for mHealth (Hype: 0)
- Joint User Association and Pairing in Multi-UAV-Assisted NOMA Networks: A Decaying-Epsilon Thompson Sampling Framework (Hype: 0)
- KABB: Knowledge-Aware Bayesian Bandits for Dynamic Expert Coordination in Multi-Agent Systems (Hype: 0)
- KLUCB Approach to Copeland Bandits (Hype: 0)
No leaderboard results yet.