SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
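The "randomly drawn belief" step can be made concrete with a minimal sketch for Bernoulli bandits: each arm keeps a Beta posterior over its success probability, and each round the algorithm samples once from every posterior and pulls the arm whose sample is largest. The arm probabilities, round count, and uniform Beta(1, 1) priors below are illustrative assumptions, not from this page.

```python
import random

def thompson_sampling(true_probs, n_rounds=2000, seed=0):
    """Bernoulli Thompson sampling with Beta(1, 1) priors on each arm.

    true_probs: hypothetical per-arm success probabilities (for simulation only;
    the algorithm itself never sees them, it only observes 0/1 rewards).
    Returns total reward plus the per-arm Beta parameters (alpha, beta).
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    alphas = [1] * n_arms  # successes + 1
    betas = [1] * n_arms   # failures + 1
    total_reward = 0
    for _ in range(n_rounds):
        # Draw one belief per arm from its posterior, then act greedily
        # with respect to that random draw -- the core of Thompson sampling.
        samples = [rng.betavariate(alphas[a], betas[a]) for a in range(n_arms)]
        arm = max(range(n_arms), key=lambda a: samples[a])
        reward = 1 if rng.random() < true_probs[arm] else 0
        alphas[arm] += reward
        betas[arm] += 1 - reward
        total_reward += reward
    return total_reward, alphas, betas
```

Because posterior draws for a clearly inferior arm are rarely the maximum, exploration of bad arms fades automatically as evidence accumulates, which is what lets the heuristic balance exploration and exploitation without an explicit schedule.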

Papers

Showing 431–440 of 655 papers

Title | Status | Hype
Model-based Meta Reinforcement Learning using Graph Structured Surrogate Models | | 0
Model-Free Approximate Bayesian Learning for Large-Scale Conversion Funnel Optimization | | 0
Modified Meta-Thompson Sampling for Linear Bandits and Its Bayes Regret Analysis | | 0
Module-wise Adaptive Distillation for Multimodality Foundation Models | | 0
Monte Carlo Tree Search Algorithms for Risk-Aware and Multi-Objective Reinforcement Learning | | 0
Monte-Carlo tree search with uncertainty propagation via optimal transport | | 0
MOTS: Minimax Optimal Thompson Sampling | | 0
Multi-Agent Active Search using Detection and Location Uncertainty | | 0
Multi-armed Bandit Algorithms on System-on-Chip: Go Frequentist or Bayesian? | | 0
Multi-Armed Bandit Strategies for Non-Stationary Reward Distributions and Delayed Feedback Processes | | 0
Page 44 of 66

No leaderboard results yet.