
Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration–exploitation dilemma in the multi-armed bandit problem. At each round, it samples a belief about the reward of each action from the current posterior distribution and selects the action that maximizes expected reward under that sampled belief; the observed reward is then used to update the posterior.
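The procedure above can be sketched for the simplest case, a Bernoulli bandit with independent Beta(1, 1) priors on each arm (the function and variable names here are illustrative, not from any particular library):

```python
import random

def thompson_sampling(true_probs, n_rounds=2000, seed=0):
    """Minimal Thompson sampling sketch for a Bernoulli bandit.

    Each arm i has an unknown success probability true_probs[i].
    The posterior over each arm's probability is Beta(alpha, beta),
    starting from the uniform prior Beta(1, 1).
    """
    rng = random.Random(seed)
    k = len(true_probs)
    alpha = [1] * k  # 1 + observed successes per arm
    beta = [1] * k   # 1 + observed failures per arm
    total_reward = 0
    for _ in range(n_rounds):
        # Draw one sample from each arm's posterior ("randomly drawn belief")
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        # Act greedily with respect to the sampled beliefs
        arm = max(range(k), key=lambda i: samples[i])
        # Observe a Bernoulli reward and update that arm's posterior
        reward = 1 if rng.random() < true_probs[arm] else 0
        total_reward += reward
        alpha[arm] += reward
        beta[arm] += 1 - reward
    return total_reward, alpha, beta
```

Because each arm is chosen with probability proportional to the posterior probability that it is optimal, play concentrates on the best arm as evidence accumulates, while arms with uncertain posteriors still get occasional exploratory pulls.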

Papers

Showing 291–300 of 655 papers

Title | Status | Hype
Bayesian Non-stationary Linear Bandits for Large-Scale Recommender Systems | Code | 0
Tsetlin Machine for Solving Contextual Bandit Problems | Code | 0
Deep Hierarchy in Bandits | | 0
Evaluating Deep Vs. Wide & Deep Learners As Contextual Bandits For Personalized Email Promo Recommendations | Code | 0
Optimal Regret Is Achievable with Bounded Approximate Inference Error: An Enhanced Bayesian Upper Confidence Bound Framework | Code | 0
Modeling Human Exploration Through Resource-Rational Reinforcement Learning | Code | 0
Augmented RBMLE-UCB Approach for Adaptive Control of Linear Quadratic Systems | | 0
IBAC: An Intelligent Dynamic Bandwidth Channel Access Avoiding Outside Warning Range Problem | | 0
On Dynamic Pricing with Covariates | | 0
Algorithms for Adaptive Experiments that Trade-off Statistical Analysis with Reward: Combining Uniform Random Assignment and Reward Maximization | | 0
