SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
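The idea above can be sketched for the simplest setting, a Bernoulli multi-armed bandit with Beta priors: each step, one draw is taken from every arm's posterior (the "randomly drawn belief"), and the arm whose draw is largest is pulled. This is a minimal illustrative sketch, not any particular paper's implementation; all function names are our own.

```python
import random

def thompson_step(successes, failures):
    """Pick an arm: sample one value from each arm's Beta posterior
    and return the index of the arm whose sampled mean is largest."""
    samples = [random.betavariate(s + 1, f + 1)  # Beta(1, 1) uniform prior
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

def run_bernoulli_bandit(true_means, steps, seed=0):
    """Simulate Thompson sampling on a Bernoulli bandit.
    Returns the number of times each arm was pulled."""
    random.seed(seed)
    k = len(true_means)
    successes, failures = [0] * k, [0] * k
    for _ in range(steps):
        arm = thompson_step(successes, failures)
        if random.random() < true_means[arm]:   # Bernoulli reward
            successes[arm] += 1
        else:
            failures[arm] += 1
    return [s + f for s, f in zip(successes, failures)]
```

Because arms with little data have wide posteriors, their draws are occasionally the largest, which yields exploration; as evidence accumulates, the posterior concentrates and the best arm is pulled almost exclusively.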

Papers

Showing 351–400 of 655 papers

Optimality of Thompson Sampling with Noninformative Priors for Pareto Bandits
Optimal Learning for Dynamic Coding in Deadline-Constrained Multi-Channel Networks
Optimal No-regret Learning in Repeated First-price Auctions
Optimal Recommendation to Users that React: Online Learning for a Class of POMDPs
Optimistic posterior sampling for reinforcement learning: worst-case regret bounds
Optimistic Thompson Sampling for No-Regret Learning in Unknown Games
Optimization of a SSP's Header Bidding Strategy using Thompson Sampling
Optimizing Adaptive Experiments: A Unified Approach to Regret Minimization and Best-Arm Identification
Ordinal Bayesian Optimisation
Parallel and Distributed Thompson Sampling for Large-scale Accelerated Exploration of Chemical Space
Parallel Bayesian Optimization Using Satisficing Thompson Sampling for Time-Sensitive Black-Box Optimization
Parallel Contextual Bandits in Wireless Handover Optimization
Parallelizing Thompson Sampling
Partial Likelihood Thompson Sampling
Partially Observable Contextual Bandits with Linear Payoffs
Partially Observable Online Change Detection via Smooth-Sparse Decomposition
PG-TS: Improved Thompson Sampling for Logistic Contextual Bandits
Planning and Learning in Risk-Aware Restless Multi-Arm Bandit Problem
Policy Gradient Optimization of Thompson Sampling Policies
Position-Based Multiple-Play Bandits with Thompson Sampling
Posterior Sampling-Based Bayesian Optimization with Tighter Bayesian Regret Bounds
Posterior sampling for reinforcement learning: worst-case regret bounds
Posterior Sampling via Autoregressive Generation
Practical Adversarial Attacks on Stochastic Bandits via Fake Data Injection
Preferential Multi-Objective Bayesian Optimization
Prior-free and prior-dependent regret bounds for Thompson Sampling
Probabilistic Inference in Reinforcement Learning Done Right
Profitable Bandits
QoS-Aware Multi-Armed Bandits
Racing Thompson: an Efficient Algorithm for Thompson Sampling with Non-conjugate Priors
Random Effect Bandits
Random Hypervolume Scalarizations for Provable Multi-Objective Black Box Optimization
Randomised Bayesian Least-Squares Policy Iteration
Randomized Exploration in Cooperative Multi-Agent Reinforcement Learning
Regenerative Particle Thompson Sampling
Regret Analysis of Bandit Problems with Causal Background Knowledge
Regret Analysis of the Finite-Horizon Gittins Index Strategy for Multi-Armed Bandits
Regret Bounds for Information-Directed Reinforcement Learning
Regularized-OFU: an efficient algorithm for general contextual bandit with optimization oracles
Reinforcement Learning for Efficient and Tuning-Free Link Adaptation
Reinforcement learning techniques for Outer Loop Link Adaptation in 4G/5G systems
Reinforcement Learning with Subspaces using Free Energy Paradigm
Reinforcement Learning with Trajectory Feedback
Remote Contextual Bandits
Residual Bootstrap Exploration for Bandit Algorithms
Revised Progressive-Hedging-Algorithm Based Two-layer Solution Scheme for Bayesian Reinforcement Learning
Reward Biased Maximum Likelihood Estimation for Reinforcement Learning
Risk and optimal policies in bandit experiments
Risk-averse Contextual Multi-armed Bandit Problem with Linear Payoffs
Risk-Constrained Thompson Sampling for CVaR Bandits
