
Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
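To make the idea concrete, here is a minimal sketch of Thompson sampling for a Bernoulli multi-armed bandit with Beta priors. This is an illustrative example (the function name, arm probabilities, and round count are all hypothetical), not an implementation from any paper listed below.

```python
import random

def thompson_sampling(true_probs, n_rounds=5000, seed=0):
    """Beta-Bernoulli Thompson sampling sketch (hypothetical example)."""
    rng = random.Random(seed)
    n_arms = len(true_probs)
    # Beta(1, 1) priors: alpha tracks successes + 1, beta tracks failures + 1.
    alpha = [1.0] * n_arms
    beta = [1.0] * n_arms
    total_reward = 0
    for _ in range(n_rounds):
        # Draw one sample from each arm's posterior belief ...
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(n_arms)]
        # ... and play the arm whose sampled mean is largest.
        arm = max(range(n_arms), key=lambda i: samples[i])
        # Observe a Bernoulli reward and update that arm's posterior.
        reward = 1 if rng.random() < true_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        total_reward += reward
    return alpha, beta, total_reward

alpha, beta, total = thompson_sampling([0.2, 0.5, 0.8])
```

Because each arm is chosen with probability equal to the posterior probability that it is optimal, exploration decays naturally: as the posteriors concentrate, the best arm is pulled almost exclusively.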

Papers

Showing 61–70 of 655 papers

| Title | Status | Hype |
| --- | --- | --- |
| Addressing Missing Data Issue for Diffusion-based Recommendation | Code | 0 |
| Cost-Efficient Online Decision Making: A Combinatorial Multi-Armed Bandit Approach | Code | 0 |
| Differentially Private Online Bayesian Estimation With Adaptive Truncation | Code | 0 |
| Cascading Bandits for Large-Scale Recommendation Problems | Code | 0 |
| Adaptive Thompson Sampling Stacks for Memory Bounded Open-Loop Planning | Code | 0 |
| Causal Bandits for Linear Structural Equation Models | Code | 0 |
| Bayesian Optimization for Categorical and Category-Specific Continuous Inputs | Code | 0 |
| Bandit-Based Prompt Design Strategy Selection Improves Prompt Optimizers | Code | 0 |
| Constructing Adversarial Examples for Vertical Federated Learning: Optimal Client Corruption through Multi-Armed Bandit | Code | 0 |
| Bayesian Algorithms for Decentralized Stochastic Bandits | Code | 0 |
Page 7 of 66

No leaderboard results yet.