SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
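The idea above can be sketched for the simplest setting: a Bernoulli multi-armed bandit with a Beta(1, 1) prior on each arm's success probability. This is a minimal illustration, not code from any of the papers listed below; the arm probabilities and round count are arbitrary choices for the example.

```python
import random

def thompson_sampling(true_probs, n_rounds=5000, seed=0):
    """Bernoulli Thompson sampling with a Beta(1, 1) prior on each arm.

    Each round: draw one sample from every arm's Beta posterior, pull
    the arm whose sampled mean is largest (i.e. act greedily with
    respect to a randomly drawn belief), then update that arm's
    posterior with the observed 0/1 reward.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    alpha = [1] * n_arms  # 1 + observed successes per arm
    beta = [1] * n_arms   # 1 + observed failures per arm
    total_reward = 0
    for _ in range(n_rounds):
        # One posterior sample per arm; the randomness here is what
        # drives exploration -- uncertain arms sometimes sample high.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(n_arms)]
        arm = max(range(n_arms), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        total_reward += reward
    return alpha, beta, total_reward

# Toy bandit with a clearly best third arm (illustrative probabilities).
alpha, beta, total = thompson_sampling([0.2, 0.5, 0.8])
best = max(range(3), key=lambda i: alpha[i] / (alpha[i] + beta[i]))
```

As the posteriors concentrate, the sampled means for inferior arms rarely exceed the best arm's, so play shifts toward the optimal arm without any explicit exploration schedule.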

Papers

Showing 351–400 of 655 papers

Title (Status and Hype columns folded in: no paper on this page has a status set, and every Hype score is 0)

Feel-Good Thompson Sampling for Contextual Bandits and Reinforcement Learning
Feel-Good Thompson Sampling for Contextual Dueling Bandits
Finite-Time Regret of Thompson Sampling Algorithms for Exponential Family Multi-Armed Bandits
First-Order Bayesian Regret Analysis of Thompson Sampling
Fixed-Confidence Guarantees for Bayesian Best-Arm Identification
Fourier Representations for Black-Box Optimization over Categorical Variables
Freshness-Aware Thompson Sampling
From Bandits Model to Deep Deterministic Policy Gradient, Reinforcement Learning with Contextual Information
Fully Distributed Bayesian Optimization with Stochastic Policies
Gaussian Process Thompson Sampling via Rootfinding
Generalized Bayesian deep reinforcement learning
Generalized Probabilistic Bisection for Stochastic Root-Finding
Generalized Regret Analysis of Thompson Sampling using Fractional Posteriors
Generalized Thompson Sampling for Contextual Bandits
Generator-Mediated Bandits: Thompson Sampling for GenAI-Powered Adaptive Interventions
Geometry-Aware Approaches for Balancing Performance and Theoretical Guarantees in Linear Bandits
Graph Neural Thompson Sampling
Feedback graph regret bounds for Thompson Sampling and UCB
Greedy Bandits with Sampled Context
Greedy k-Center from Noisy Distance Samples
GuideBoot: Guided Bootstrap for Deep Contextual Bandits
GUTS: Generalized Uncertainty-Aware Thompson Sampling for Multi-Agent Active Search
gym-saturation: Gymnasium environments for saturation provers (System description)
Hierarchical Bayesian Bandits
High-dimensional near-optimal experiment design for drug discovery via Bayesian sparse sampling
Horde of Bandits using Gaussian Markov Random Fields
Human collective intelligence as distributed Bayesian inference
Hypermodels for Exploration
IBAC: An Intelligent Dynamic Bandwidth Channel Access Avoiding Outside Warning Range Problem
Improved Bayesian Regret Bounds for Thompson Sampling in Reinforcement Learning
Improved Regret Bounds for Thompson Sampling in Linear Quadratic Control Problems
Improved Worst-Case Regret Bounds for Randomized Least-Squares Value Iteration
Improving Reward-Conditioned Policies for Multi-Armed Bandits using Normalized Weight Functions
Improving sample efficiency of high dimensional Bayesian optimization with MCMC
Improving Thompson Sampling via Information Relaxation for Budgeted Multi-armed Bandits
Incentivized Exploration for Multi-Armed Bandits under Reward Drift
Incentivizing Combinatorial Bandit Exploration
Incentivizing Exploration with Linear Contexts and Combinatorial Actions
Incorporating Behavioral Constraints in Online AI Systems
Increasing Students' Engagement to Reminder Emails Through Multi-Armed Bandits
Indexed Minimum Empirical Divergence-Based Algorithms for Linear Bandits
In-Domain African Languages Translation Using LLMs and Multi-armed Bandits
Influence Diagram Bandits: Variational Thompson Sampling for Structured Bandit Problems
Influencing Bandits: Arm Selection for Preference Shaping
Information Directed Sampling and Bandits with Heteroscedastic Noise
Information Directed Sampling for Stochastic Bandits with Graph Feedback
Information-Theoretic Confidence Bounds for Reinforcement Learning
IntelligentPooling: Practical Thompson Sampling for mHealth
Joint User Association and Pairing in Multi-UAV-Assisted NOMA Networks: A Decaying-Epsilon Thompson Sampling Framework
KABB: Knowledge-Aware Bayesian Bandits for Dynamic Expert Coordination in Multi-Agent Systems
Page 8 of 14

No leaderboard results yet.