SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. At each step, it draws a belief about the reward distributions at random from the posterior and chooses the action that maximizes expected reward under that sampled belief.
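As an illustration of the idea, here is a minimal sketch of Thompson sampling for a Bernoulli bandit, using Beta posteriors (the class name `BernoulliTS` and the simulated arm probabilities are illustrative, not from any paper listed below):

```python
import random

class BernoulliTS:
    """Thompson sampling for Bernoulli-reward arms with Beta(1, 1) priors."""

    def __init__(self, n_arms):
        # Beta posterior parameters per arm: successes + 1, failures + 1.
        self.alpha = [1] * n_arms
        self.beta = [1] * n_arms

    def select(self):
        # Sample one plausible success rate per arm from its posterior,
        # then act greedily with respect to the sampled beliefs.
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, arm, reward):
        # Bayesian update: a Bernoulli observation bumps one Beta parameter.
        if reward:
            self.alpha[arm] += 1
        else:
            self.beta[arm] += 1

# Small simulation: two arms with (hypothetical) success rates 0.2 and 0.8.
random.seed(0)
true_probs = [0.2, 0.8]
agent = BernoulliTS(len(true_probs))
counts = [0, 0]
for _ in range(2000):
    arm = agent.select()
    counts[arm] += 1
    agent.update(arm, random.random() < true_probs[arm])
```

After enough rounds the posterior for the better arm concentrates, so the sampled beliefs select it almost every time; the random draw is what supplies exploration early on.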

Papers

Showing 621–630 of 655 papers

Title | Status | Hype
Optimizing Pessimism in Dynamic Treatment Regimes: A Bayesian Learning Approach | Code | 0
Asynchronous Parallel Bayesian Optimisation via Thompson Sampling | Code | 0
Dynamic Assortment Selection and Pricing with Censored Preference Feedback | Code | 0
Addressing Missing Data Issue for Diffusion-based Recommendation | Code | 0
Asynchronous ε-Greedy Bayesian Optimisation | Code | 0
Bayesian Non-stationary Linear Bandits for Large-Scale Recommender Systems | Code | 0
Bayesian bandits: balancing the exploration-exploitation tradeoff via double sampling | Code | 0
Information-Directed Exploration for Deep Reinforcement Learning | Code | 0
VITS : Variational Inference Thompson Sampling for contextual bandits | Code | 0
Representative Action Selection for Large Action-Space Meta-Bandits | Code | 0
Page 63 of 66

No leaderboard results yet.