SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
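The "randomly drawn belief" rule above can be sketched for the classic Bernoulli bandit with Beta priors: each round, sample one plausible success rate per arm from its posterior and pull the arm whose sample is highest. The arm probabilities, round count, and function name below are illustrative, not from the source.

```python
import random

def thompson_sampling(true_probs, n_rounds=2000, seed=0):
    """Beta-Bernoulli Thompson sampling for a multi-armed bandit.

    Each arm's unknown success probability gets a Beta(1, 1) prior;
    every round we draw one posterior sample per arm and pull the arm
    whose sample (the randomly drawn belief) is largest.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    alpha = [1] * n_arms  # Beta alpha: 1 + observed successes
    beta = [1] * n_arms   # Beta beta: 1 + observed failures
    pulls = [0] * n_arms
    for _ in range(n_rounds):
        # One posterior sample per arm; act greedily on the samples.
        samples = [rng.betavariate(alpha[a], beta[a]) for a in range(n_arms)]
        arm = max(range(n_arms), key=lambda a: samples[a])
        reward = 1 if rng.random() < true_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = thompson_sampling([0.2, 0.5, 0.8])
```

Because posterior samples for clearly inferior arms rarely come out on top, exploration of those arms decays automatically, so most pulls concentrate on the best arm.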

Papers

Showing 126–150 of 655 papers

Title | Status | Hype
Bayesian Learning of Optimal Policies in Markov Decision Processes with Countably Infinite State-Space | | 0
Adaptive Operator Selection Based on Dynamic Thompson Sampling for MOEA/D | | 0
Tsallis-INF: An Optimal Algorithm for Stochastic and Adversarial Bandits | | 0
A Quantile-based Approach for Hyperparameter Transfer Learning | | 0
Bayesian Analysis of Combinatorial Gaussian Process Bandits | | 0
Combinatorial Multi-armed Bandits: Arm Selection via Group Testing | | 0
A Nonparametric Contextual Bandit with Arm-level Eligibility Control for Customer Service Routing | | 0
An Online Learning Framework for Energy-Efficient Navigation of Electric Vehicles | | 0
Adaptive Model Selection Framework: An Application to Airline Pricing | | 0
Belief Flows of Robust Online Learning | | 0
BBQ-Networks: Efficient Exploration in Deep Reinforcement Learning for Task-Oriented Dialogue Systems | | 0
An Information-Theoretic Analysis of Thompson Sampling with Infinite Action Spaces | | 0
Best Arm Identification in Batched Multi-armed Bandit Problems | | 0
Active RLHF via Best Policy Learning from Trajectory Preference Feedback | | 0
Better Optimism By Bayes: Adaptive Planning with Rich Models | | 0
Blind Exploration and Exploitation of Stochastic Experts | | 0
Bootstrapped Thompson Sampling and Deep Exploration | | 0
BOTS: Batch Bayesian Optimization of Extended Thompson Sampling for Severely Episode-Limited RL Settings | | 0
Calibrated Fairness in Bandits | | 0
A Note on Information-Directed Sampling and Thompson Sampling | | 0
An Unbiased Data Collection and Content Exploitation/Exploration Strategy for Personalization | | 0
Causal Bandits without prior knowledge using separating sets | | 0
Chained Information-Theoretic bounds and Tight Regret Rate for Linear Bandit Problems | | 0
Bayesian Quantile and Expectile Optimisation | | 0
Page 6 of 27

No leaderboard results yet.