SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
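The definition above can be sketched concretely for the Bernoulli multi-armed bandit: each arm keeps a Beta posterior over its success probability, one sample is drawn from each posterior, and the arm whose sample is largest is pulled. A minimal illustration (the arm probabilities, pull budget, and function name are made up for this example):

```python
import random

def thompson_sampling(arms, pulls=10000, seed=0):
    """Bernoulli Thompson sampling with Beta(1, 1) priors.

    `arms` holds the true (unknown) success probabilities; the
    algorithm only ever observes the sampled 0/1 rewards.
    """
    rng = random.Random(seed)
    # Beta posterior parameters per arm: alpha = successes + 1, beta = failures + 1.
    alpha = [1] * len(arms)
    beta = [1] * len(arms)
    counts = [0] * len(arms)
    for _ in range(pulls):
        # Draw one belief sample per arm, then act greedily on the samples.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(len(arms))]
        i = max(range(len(arms)), key=lambda k: samples[k])
        reward = 1 if rng.random() < arms[i] else 0
        alpha[i] += reward
        beta[i] += 1 - reward
        counts[i] += 1
    return counts

counts = thompson_sampling([0.2, 0.5, 0.8])
```

Because the posterior sampling is random, poorly explored arms occasionally produce large samples and get pulled, which is exactly how the exploration-exploitation trade-off is resolved; as evidence accumulates, the pull counts concentrate on the best arm.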

Papers

Showing 41–50 of 655 papers

Title | Status | Hype
Generator-Mediated Bandits: Thompson Sampling for GenAI-Powered Adaptive Interventions | — | 0
In-Domain African Languages Translation Using LLMs and Multi-armed Bandits | — | 0
Dynamic Decision-Making under Model Misspecification | — | 0
Addressing Missing Data Issue for Diffusion-based Recommendation | Code | 0
Thompson Sampling-like Algorithms for Stochastic Rising Bandits | — | 0
Leveraging Offline Data from Similar Systems for Online Linear Quadratic Control | — | 0
Connecting Thompson Sampling and UCB: Towards More Efficient Trade-offs Between Privacy and Regret | — | 0
Bayesian learning of the optimal action-value function in a Markov decision process | — | 0
Neural Contextual Bandits Under Delayed Feedback Constraints | — | 0
Counterfactual Inference under Thompson Sampling | — | 0
