SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. At each step it draws a random belief about the environment (a sample from the posterior distribution over model parameters) and chooses the action that maximizes expected reward under that sampled belief.
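The procedure above can be sketched for the simplest case, a Bernoulli bandit with independent Beta(1, 1) priors on each arm; the function name and parameters here are illustrative, not from any paper listed below.

```python
import random

def thompson_sampling(true_probs, horizon, seed=0):
    """Bernoulli Thompson sampling with a Beta(1, 1) prior on each arm.

    true_probs: unknown-to-the-agent success probability of each arm
    (used here only to simulate rewards). Returns per-arm success and
    failure counts plus the total reward collected.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    s = [0] * n_arms  # observed successes; posterior is Beta(1 + s, 1 + f)
    f = [0] * n_arms  # observed failures
    total_reward = 0
    for _ in range(horizon):
        # Sample one mean-reward estimate per arm from its posterior
        # and play the arm whose sample is largest.
        samples = [rng.betavariate(1 + s[i], 1 + f[i]) for i in range(n_arms)]
        arm = max(range(n_arms), key=lambda i: samples[i])
        # Simulate a Bernoulli reward and update that arm's posterior.
        if rng.random() < true_probs[arm]:
            s[arm] += 1
            total_reward += 1
        else:
            f[arm] += 1
    return s, f, total_reward
```

Because posterior sampling favors arms that are either promising or still uncertain, play concentrates on the best arm as evidence accumulates, which is exactly the exploration-exploitation trade-off the definition describes.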

Papers

Showing 451–475 of 655 papers

Title (hype score shown per paper; 0 for all entries below)

Non-Stationary Bandit Learning via Predictive Sampling
Non-Stationary Dynamic Pricing Via Actor-Critic Information-Directed Pricing
Non-Stationary Latent Bandits
No Regrets for Learning the Prior in Bandits
Observation-Free Attacks on Stochastic Bandits
On Adaptive Estimation for Dynamic Bernoulli Bandits
On Batch Bayesian Optimization
On Dynamic Pricing with Covariates
On Efficiency in Hierarchical Reinforcement Learning
On Improved Regret Bounds In Bayesian Optimization with Gaussian Noise
On Kernelized Multi-Armed Bandits with Constraints
On learning Whittle index policy for restless bandits with scalable regret
Online Algorithms For Parameter Mean And Variance Estimation In Dynamic Regression Models
Online Continuous Hyperparameter Optimization for Generalized Linear Contextual Bandits
Online Causal Inference for Advertising in Real-Time Bidding Auctions
Online Learning and Distributed Control for Residential Demand Response
Online Learning-based Waveform Selection for Improved Vehicle Recognition in Automotive Radar
Online Learning of Energy Consumption for Navigation of Electric Vehicles
Online Learning of Network Bottlenecks via Minimax Paths
Online Residential Demand Response via Contextual Multi-Armed Bandits
Only Pay for What Is Uncertain: Variance-Adaptive Thompson Sampling
On Multi-Armed Bandit Designs for Dose-Finding Clinical Trials
On Online Learning in Kernelized Markov Decision Processes
On The Differential Privacy of Thompson Sampling With Gaussian Prior
On the Importance of Uncertainty in Decision-Making with Large Language Models
Page 19 of 27

No leaderboard results yet.