SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
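The idea above can be sketched for the simplest case, a Bernoulli bandit with a Beta posterior per arm. This is a minimal illustrative example, not code from any of the papers listed below; the arm probabilities and round count are made up.

```python
import random

def thompson_sampling(true_probs, n_rounds=2000, seed=0):
    """Beta-Bernoulli Thompson sampling.

    Each arm keeps a Beta(successes + 1, failures + 1) posterior over its
    reward probability. Every round we draw one sample per arm from its
    posterior (a randomly drawn belief) and pull the arm whose sample is
    largest, then update that arm's counts with the observed reward.
    """
    rng = random.Random(seed)
    k = len(true_probs)
    successes = [0] * k
    failures = [0] * k
    pulls = [0] * k
    for _ in range(n_rounds):
        # One posterior sample per arm; acting greedily on these samples
        # is what balances exploration and exploitation.
        samples = [rng.betavariate(successes[i] + 1, failures[i] + 1)
                   for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

# Hypothetical three-armed bandit; arm 2 (p=0.7) is best and should
# accumulate the large majority of pulls.
pulls = thompson_sampling([0.3, 0.5, 0.7])
```

As the posteriors concentrate, samples from clearly inferior arms rarely win, so exploration tapers off automatically without a tuned schedule.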

Papers

Showing 351–375 of 655 papers

Title | Status | Hype
Fixed-Confidence Guarantees for Bayesian Best-Arm Identification | | 0
Fourier Representations for Black-Box Optimization over Categorical Variables | | 0
Freshness-Aware Thompson Sampling | | 0
From Bandits Model to Deep Deterministic Policy Gradient, Reinforcement Learning with Contextual Information | | 0
Fully Distributed Bayesian Optimization with Stochastic Policies | | 0
Gaussian Process Thompson Sampling via Rootfinding | | 0
Generalized Bayesian deep reinforcement learning | | 0
Generalized Probabilistic Bisection for Stochastic Root-Finding | | 0
Generalized Regret Analysis of Thompson Sampling using Fractional Posteriors | | 0
Generalized Thompson Sampling for Contextual Bandits | | 0
Generator-Mediated Bandits: Thompson Sampling for GenAI-Powered Adaptive Interventions | | 0
Geometry-Aware Approaches for Balancing Performance and Theoretical Guarantees in Linear Bandits | | 0
Graph Neural Thompson Sampling | | 0
Feedback graph regret bounds for Thompson Sampling and UCB | | 0
Greedy Bandits with Sampled Context | | 0
Greedy k-Center from Noisy Distance Samples | | 0
GuideBoot: Guided Bootstrap for Deep Contextual Bandits | | 0
GUTS: Generalized Uncertainty-Aware Thompson Sampling for Multi-Agent Active Search | | 0
gym-saturation: Gymnasium environments for saturation provers (System description) | | 0
Hierarchical Bayesian Bandits | | 0
High-dimensional near-optimal experiment design for drug discovery via Bayesian sparse sampling | | 0
Horde of Bandits using Gaussian Markov Random Fields | | 0
Human collective intelligence as distributed Bayesian inference | | 0
Hypermodels for Exploration | | 0
IBAC: An Intelligent Dynamic Bandwidth Channel Access Avoiding Outside Warning Range Problem | | 0
Page 15 of 27

No leaderboard results yet.