SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
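The idea above can be sketched concretely for the Bernoulli bandit: keep a Beta posterior per arm, draw one sample from each posterior, and pull the arm whose sample is largest. This is a minimal illustration, not code from any of the papers listed below; the function name and parameters are hypothetical.

```python
import random

def thompson_sampling(true_probs, n_rounds=2000, seed=0):
    """Bernoulli Thompson sampling with Beta(1, 1) priors on each arm."""
    rng = random.Random(seed)
    n_arms = len(true_probs)
    alphas = [1] * n_arms  # Beta alpha: 1 + observed successes
    betas = [1] * n_arms   # Beta beta:  1 + observed failures
    total_reward = 0
    for _ in range(n_rounds):
        # Draw one sample from each arm's posterior belief...
        samples = [rng.betavariate(alphas[a], betas[a]) for a in range(n_arms)]
        # ...then act greedily with respect to that random draw.
        arm = max(range(n_arms), key=lambda a: samples[a])
        reward = 1 if rng.random() < true_probs[arm] else 0
        alphas[arm] += reward
        betas[arm] += 1 - reward
        total_reward += reward
    return alphas, betas, total_reward
```

Because each arm is chosen with probability equal to the posterior probability that it is optimal, exploration fades naturally as the posteriors concentrate, and most pulls end up on the best arm.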

Papers

Showing 221–230 of 655 papers

| Title | Status | Hype |
| --- | --- | --- |
| Optimality of Thompson Sampling with Noninformative Priors for Pareto Bandits | — | 0 |
| Two-sided Competing Matching Recommendation Markets With Quota and Complementary Preferences Constraints | Code | 0 |
| Differentially Private Online Bayesian Estimation With Adaptive Truncation | Code | 0 |
| A Combinatorial Semi-Bandit Approach to Charging Station Selection for Electric Vehicles | — | 0 |
| Thompson Sampling with Diffusion Generative Prior | — | 0 |
| Reinforcement Learning in Credit Scoring and Underwriting | — | 0 |
| Neural Bandits for Data Mining: Searching for Dangerous Polypharmacy | Code | 0 |
| Online Learning-based Waveform Selection for Improved Vehicle Recognition in Automotive Radar | — | 0 |
| Monte Carlo Tree Search Algorithms for Risk-Aware and Multi-Objective Reinforcement Learning | — | 0 |
| Thompson Sampling for High-Dimensional Sparse Linear Contextual Bandits | Code | 0 |
Page 23 of 66

No leaderboard results yet.