SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
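The sampling step described above can be sketched for the classic Bernoulli bandit with Beta priors — a minimal illustration, not any particular paper's method; the arm probabilities and horizon here are invented:

```python
import random

def thompson_sampling(successes, failures, rng):
    """Draw one sample from each arm's Beta posterior and play the argmax.

    With a Beta(1, 1) prior, the posterior after s successes and f failures
    is Beta(s + 1, f + 1); sampling from it and maximizing implements
    'maximize expected reward under a randomly drawn belief'.
    """
    samples = [rng.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

# Hypothetical 3-armed Bernoulli bandit with unknown payout probabilities.
true_probs = [0.3, 0.5, 0.7]
successes = [0, 0, 0]
failures = [0, 0, 0]
rng = random.Random(0)  # seeded so the simulation is reproducible

for _ in range(2000):
    arm = thompson_sampling(successes, failures, rng)
    if rng.random() < true_probs[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1
```

Because poorly sampled arms retain wide posteriors, the random draws occasionally favor them, giving exploration for free; as evidence accumulates, the posterior for the best arm concentrates and exploitation dominates.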

Papers

Showing 101–150 of 655 papers

Title | Status | Hype
A Multi-Armed Bandit to Smartly Select a Training Set from Big Medical Data |  | 0
Adaptive Combinatorial Allocation |  | 0
Automatic Ensemble Learning for Online Influence Maximization |  | 0
AutoSeM: Automatic Task Selection and Mixing in Multi-Task Learning |  | 0
Bag of Policies for Distributional Deep Exploration |  | 0
BanditCAT and AutoIRT: Machine Learning Approaches to Computerized Adaptive Testing and Item Calibration |  | 0
Bandit Change-Point Detection for Real-Time Monitoring High-Dimensional Data Under Sampling Control |  | 0
Bandit Convex Optimization: √T Regret in One Dimension |  | 0
Bandit Learning for Diversified Interactive Recommendation |  | 0
Adaptive Rate of Convergence of Thompson Sampling for Gaussian Process Optimization |  | 0
Bandit Models of Human Behavior: Reward Processing in Mental Disorders |  | 0
Bandit Policies for Reliable Cellular Network Handovers in Extreme Mobility |  | 0
Bandits Under The Influence (Extended Version) |  | 0
Bandit Theory and Thompson Sampling-Guided Directed Evolution for Sequence Optimization |  | 0
Batch Bayesian Optimization for Replicable Experimental Design |  | 0
Adaptive Sensor Placement for Continuous Spaces |  | 0
Batched Thompson Sampling |  | 0
Batched Thompson Sampling for Multi-Armed Bandits |  | 0
An Arm-Wise Randomization Approach to Combinatorial Linear Semi-Bandits |  | 0
Bayesian Bandit Algorithms with Approximate Inference in Stochastic Linear Bandits |  | 0
An Efficient Algorithm For Generalized Linear Bandit: Online Stochastic Gradient Descent and Thompson Sampling |  | 0
Bayesian Best-Arm Identification for Selecting Influenza Mitigation Strategies |  | 0
Code Repair with LLMs gives an Exploration-Exploitation Tradeoff |  | 0
Bayesian decision-making under misspecified priors with applications to meta-learning |  | 0
Bayesian-Guided Generation of Synthetic Microbiomes with Minimized Pathogenicity |  | 0
Bayesian Learning of Optimal Policies in Markov Decision Processes with Countably Infinite State-Space |  | 0
Adaptive Operator Selection Based on Dynamic Thompson Sampling for MOEA/D |  | 0
Tsallis-INF: An Optimal Algorithm for Stochastic and Adversarial Bandits |  | 0
A Quantile-based Approach for Hyperparameter Transfer Learning |  | 0
Bayesian Analysis of Combinatorial Gaussian Process Bandits |  | 0
Combinatorial Multi-armed Bandits: Arm Selection via Group Testing |  | 0
A Nonparametric Contextual Bandit with Arm-level Eligibility Control for Customer Service Routing |  | 0
An Online Learning Framework for Energy-Efficient Navigation of Electric Vehicles |  | 0
Adaptive Model Selection Framework: An Application to Airline Pricing |  | 0
Belief Flows of Robust Online Learning |  | 0
BBQ-Networks: Efficient Exploration in Deep Reinforcement Learning for Task-Oriented Dialogue Systems |  | 0
An Information-Theoretic Analysis of Thompson Sampling with Infinite Action Spaces |  | 0
Best Arm Identification in Batched Multi-armed Bandit Problems |  | 0
Active RLHF via Best Policy Learning from Trajectory Preference Feedback |  | 0
Better Optimism By Bayes: Adaptive Planning with Rich Models |  | 0
Blind Exploration and Exploitation of Stochastic Experts |  | 0
Bootstrapped Thompson Sampling and Deep Exploration |  | 0
BOTS: Batch Bayesian Optimization of Extended Thompson Sampling for Severely Episode-Limited RL Settings |  | 0
Calibrated Fairness in Bandits |  | 0
A Note on Information-Directed Sampling and Thompson Sampling |  | 0
An Unbiased Data Collection and Content Exploitation/Exploration Strategy for Personalization |  | 0
Causal Bandits without prior knowledge using separating sets |  | 0
Chained Information-Theoretic bounds and Tight Regret Rate for Linear Bandit Problems |  | 0
Bayesian Quantile and Expectile Optimisation |  | 0
Page 3 of 14

No leaderboard results yet.