SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
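The idea can be sketched in a few lines for a Bernoulli bandit: keep a Beta posterior over each arm's win rate, sample one value per arm from those posteriors, and pull the arm with the largest sample. This is a minimal illustrative sketch, not taken from any listed paper; the 3-arm win rates are made up for the demo.

```python
import random

def thompson_sampling(successes, failures, rng=random):
    """Sample each arm's Beta(successes+1, failures+1) posterior
    and return the index of the arm with the largest sample."""
    samples = [rng.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=lambda i: samples[i])

# Simulate a 3-armed Bernoulli bandit with hidden win rates
# (hypothetical values, chosen only for illustration).
true_rates = [0.2, 0.5, 0.8]
successes = [0, 0, 0]
failures = [0, 0, 0]
rng = random.Random(0)  # fixed seed for reproducibility

for _ in range(2000):
    arm = thompson_sampling(successes, failures, rng)
    if rng.random() < true_rates[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

pulls = [s + f for s, f in zip(successes, failures)]
```

Because the posterior for a clearly better arm concentrates quickly, the sampler soon plays the best arm almost exclusively while still occasionally exploring the others.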

Papers

Showing 101–125 of 655 papers

Title (Status, Hype)

- Smart Routing with Precise Link Estimation: DSEE-Based Anypath Routing for Reliable Wireless Networking (Hype: 0)
- Analyzing and Enhancing Queue Sampling for Energy-Efficient Remote Control of Bandits (Hype: 0)
- Thompson Sampling for Infinite-Horizon Discounted Decision Processes (Hype: 0)
- Constructing Adversarial Examples for Vertical Federated Learning: Optimal Client Corruption through Multi-Armed Bandit (Code; Hype: 0)
- Efficient and Adaptive Posterior Sampling Algorithms for Bandits (Hype: 0)
- Bayesian Optimization with LLM-Based Acquisition Functions for Natural Language Preference Elicitation (Hype: 0)
- Bayesian-Guided Generation of Synthetic Microbiomes with Minimized Pathogenicity (Hype: 0)
- Randomized Exploration in Cooperative Multi-Agent Reinforcement Learning (Hype: 0)
- Feel-Good Thompson Sampling for Contextual Dueling Bandits (Hype: 0)
- Online Learning of Decision Trees with Thompson Sampling (Code; Hype: 0)
- A Reinforcement Learning based Reset Policy for CDCL SAT Solvers (Hype: 0)
- On the Importance of Uncertainty in Decision-Making with Large Language Models (Hype: 0)
- Meta Learning in Bandits within Shared Affine Subspaces (Hype: 0)
- A resource-constrained stochastic scheduling algorithm for homeless street outreach and gleaning edible food (Hype: 0)
- Cramming Contextual Bandits for On-policy Statistical Evaluation (Hype: 0)
- ε-Neural Thompson Sampling of Deep Brain Stimulation for Parkinson Disease Treatment (Hype: 0)
- TS-RSR: A provably efficient approach for batch Bayesian Optimization (Hype: 0)
- Chained Information-Theoretic bounds and Tight Regret Rate for Linear Bandit Problems (Hype: 0)
- Epsilon-Greedy Thompson Sampling to Bayesian Optimization (Hype: 0)
- Influencing Bandits: Arm Selection for Preference Shaping (Hype: 0)
- Towards Efficient and Optimal Covariance-Adaptive Algorithms for Combinatorial Semi-Bandits (Hype: 0)
- Optimizing Adaptive Experiments: A Unified Approach to Regret Minimization and Best-Arm Identification (Hype: 0)
- Thompson Sampling in Partially Observable Contextual Bandits (Hype: 0)
- Diffusion Models Meet Contextual Bandits with Large Action Spaces (Hype: 0)
- Tree Ensembles for Contextual Bandits (Hype: 0)
Page 5 of 27
