SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
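The idea above can be sketched concretely for the Bernoulli bandit case: maintain a Beta posterior over each arm's win rate, draw one sample per arm from those posteriors, and pull the arm whose sampled value is highest. This is a minimal illustrative sketch (the arm count, reward rates, and seed below are made up for the example, not taken from any paper listed here):

```python
import random

def thompson_sample(successes, failures, rng):
    """Pick the arm maximizing reward under one randomly drawn belief.

    Each arm's unknown reward probability gets a Beta(s+1, f+1)
    posterior (uniform prior); we sample once per arm and act
    greedily with respect to that random draw.
    """
    samples = [rng.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

# Simulate a 3-armed Bernoulli bandit with hidden win rates.
true_rates = [0.2, 0.5, 0.8]   # unknown to the learner
rng = random.Random(42)
succ, fail = [0, 0, 0], [0, 0, 0]
for _ in range(2000):
    arm = thompson_sample(succ, fail, rng)
    if rng.random() < true_rates[arm]:
        succ[arm] += 1          # observed reward: update posterior
    else:
        fail[arm] += 1
pulls = [s + f for s, f in zip(succ, fail)]
```

Because the posterior for the best arm concentrates as evidence accumulates, exploration of the weaker arms tapers off naturally and most pulls end up on the highest-rate arm.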

Papers

Showing 581–590 of 655 papers

| Title | Status | Hype |
| --- | --- | --- |
| A Nonparametric Contextual Bandit with Arm-level Eligibility Control for Customer Service Routing | | 0 |
| Tsallis-INF: An Optimal Algorithm for Stochastic and Adversarial Bandits | | 0 |
| A Note on Information-Directed Sampling and Thompson Sampling | | 0 |
| An Unbiased Data Collection and Content Exploitation/Exploration Strategy for Personalization | | 0 |
| Apple Tasting Revisited: Bayesian Approaches to Partially Monitored Online Binary Classification | | 0 |
| Approximate information for efficient exploration-exploitation strategies | | 0 |
| Approximate Thompson Sampling for Learning Linear Quadratic Regulators with O(T) Regret | | 0 |
| A Practical Method for Solving Contextual Bandit Problems Using Decision Trees | | 0 |
| A Provably Efficient Model-Free Posterior Sampling Method for Episodic Reinforcement Learning | | 0 |
| Efficiently Tackling Million-Dimensional Multiobjective Problems: A Direction Sampling and Fine-Tuning Approach | | 0 |
Page 59 of 66

No leaderboard results yet.