
Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. At each step it draws a belief at random (a sample from the posterior over the unknown model parameters) and chooses the action that maximizes expected reward under that sampled belief.
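
As a concrete illustration of the idea, the sketch below runs Thompson sampling on a Bernoulli bandit with Beta(1, 1) priors. This is a minimal, assumed setup for illustration only; the function name, arm means, and horizon are invented here and are not drawn from any of the papers listed on this page.

```python
import numpy as np

def thompson_sampling_bernoulli(true_means, horizon, seed=0):
    """Beta-Bernoulli Thompson sampling on a K-armed bandit (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    successes = np.ones(k)  # alpha counts of Beta posterior, starting from a uniform prior
    failures = np.ones(k)   # beta counts
    total_reward = 0.0
    for _ in range(horizon):
        # Sample one parameter per arm from its posterior, then act greedily on the samples.
        theta = rng.beta(successes, failures)
        arm = int(np.argmax(theta))
        reward = float(rng.random() < true_means[arm])  # Bernoulli reward from the chosen arm
        successes[arm] += reward
        failures[arm] += 1.0 - reward
        total_reward += reward
    return total_reward, successes, failures

if __name__ == "__main__":
    reward, s, f = thompson_sampling_bernoulli([0.3, 0.5, 0.7], horizon=10_000)
    print("total reward:", reward)
    print("posterior means:", s / (s + f))
```

Because arms with uncertain posteriors occasionally produce large samples, the algorithm keeps exploring them, while arms whose posteriors concentrate on low means are chosen less and less often.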

Papers

Showing 251–275 of 655 papers

Title | Status | Hype
Risk-averse Contextual Multi-armed Bandit Problem with Linear Payoffs | | 0
Analysis of Thompson Sampling for Controlling Unknown Linear Diffusion Processes | | 0
Thompson Sampling for (Combinatorial) Pure Exploration | | 0
Thompson Sampling for Robust Transfer in Multi-Task Bandits | Code | 0
Thompson Sampling Achieves Õ(√T) Regret in Linear Quadratic Control | | 0
A Contextual Combinatorial Semi-Bandit Approach to Network Bottleneck Identification | | 0
On Provably Robust Meta-Bayesian Optimization | Code | 0
Top Two Algorithms Revisited | | 0
Regret Bounds for Information-Directed Reinforcement Learning | | 0
A Simple and Optimal Policy Design with Safety against Heavy-Tailed Risk for Stochastic Bandits | | 0
Finite-Time Regret of Thompson Sampling Algorithms for Exponential Family Multi-Armed Bandits | | 0
Bandit Theory and Thompson Sampling-Guided Directed Evolution for Sequence Optimization | | 0
Incentivizing Combinatorial Bandit Exploration | | 0
Mixed-Effect Thompson Sampling | Code | 0
Lifting the Information Ratio: An Information-Theoretic Analysis of Thompson Sampling for Contextual Bandits | | 0
Surrogate modeling for Bayesian optimization beyond a single Gaussian process | | 0
Information-Directed Selection for Top-Two Algorithms | Code | 0
Fast Change Identification in Multi-Play Bandits and its Applications in Wireless Networks | | 0
Semi-Parametric Contextual Bandits with Graph-Laplacian Regularization | | 0
Adjusted Expected Improvement for Cumulative Regret Minimization in Noisy Bayesian Optimization | | 0
Non-Stationary Bandit Learning via Predictive Sampling | | 0
Evolutionary Multi-Armed Bandits with Genetic Thompson Sampling | Code | 0
Thompson Sampling for Bandit Learning in Matching Markets | Code | 0
On Kernelized Multi-Armed Bandits with Constraints | | 0
Multi-armed bandits for resource efficient, online optimization of language model pre-training: the use case of dynamic masking | Code | 0
Page 11 of 27

No leaderboard results yet.