SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
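The idea can be sketched for the Bernoulli bandit case, where each arm's unknown success probability gets a Beta posterior: sample one value per arm from its posterior (the "randomly drawn belief") and pull the arm whose sample is largest. A minimal sketch (the function names and the simulation setup are illustrative, not from any particular library):

```python
import random

def thompson_step(successes, failures):
    """Sample one reward estimate per arm from its Beta(s+1, f+1)
    posterior and return the index of the arm with the largest sample."""
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=lambda i: samples[i])

def run_bandit(true_probs, n_rounds, seed=0):
    """Simulate a Bernoulli bandit; returns per-arm pull counts."""
    random.seed(seed)
    k = len(true_probs)
    successes, failures = [0] * k, [0] * k
    for _ in range(n_rounds):
        arm = thompson_step(successes, failures)
        # Observe a Bernoulli reward and update that arm's posterior.
        if random.random() < true_probs[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return [s + f for s, f in zip(successes, failures)]
```

Because arms are chosen by posterior sampling rather than by a fixed greedy rule, every arm keeps a nonzero chance of being pulled, but pulls concentrate on the empirically best arm as its posterior sharpens.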

Papers

Showing 271–280 of 655 papers

Adjusted Expected Improvement for Cumulative Regret Minimization in Noisy Bayesian Optimization
Active Search for High Recall: a Non-Stationary Extension of Thompson Sampling
Context Attentive Bandits: Contextual Bandit with Restricted Context
A relaxed technical assumption for posterior sampling-based reinforcement learning for control of unknown linear systems
Constrained Thompson Sampling for Wireless Link Optimization
A Reinforcement Learning based Reset Policy for CDCL SAT Solvers
Constrained Thompson Sampling for Real-Time Electricity Pricing with Grid Reliability Constraints
Constrained Contextual Bandit Learning for Adaptive Radar Waveform Selection
Efficiently Tackling Million-Dimensional Multiobjective Problems: A Direction Sampling and Fine-Tuning Approach
Connections Between Mirror Descent, Thompson Sampling and the Information Ratio
