SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. At each step it draws a random belief (a sample from the posterior over the reward model) and chooses the action that maximizes expected reward under that sampled belief.
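The loop described above can be sketched for the simplest case, a Bernoulli bandit with Beta posteriors. This is an illustrative sketch, not code from any of the papers listed below; the function name and the example arm probabilities are made up for the demo.

```python
import random

def thompson_sampling(true_probs, n_rounds=10000, seed=0):
    """Bernoulli bandit via Thompson sampling with Beta(1, 1) priors.

    Each round: sample one plausible mean reward per arm from its Beta
    posterior, pull the arm whose sampled mean is highest, then update
    that arm's posterior with the observed 0/1 reward.
    Returns the pull count per arm.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    alpha = [1] * n_arms  # prior successes + 1
    beta = [1] * n_arms   # prior failures + 1
    pulls = [0] * n_arms
    for _ in range(n_rounds):
        # Draw a random belief about each arm's mean reward...
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(n_arms)]
        # ...and act greedily with respect to that sampled belief.
        arm = max(range(n_arms), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

# With hypothetical arm probabilities 0.2, 0.5, 0.8, the best arm's
# posterior concentrates and it ends up pulled far more than the others.
pulls = thompson_sampling([0.2, 0.5, 0.8])
```

Because the arm choice is greedy only with respect to a *sampled* belief, arms with uncertain posteriors still get occasional pulls, which is how the algorithm balances exploration against exploitation without an explicit exploration parameter.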

Papers

Showing 251-300 of 655 papers

From Predictions to Decisions: The Importance of Joint Predictive Distributions
Evaluation of Explore-Exploit Policies in Multi-result Ranking Systems
Convergence Rates of Posterior Distributions in Markov Decision Process
Expected Improvement-based Contextual Bandits
A study of Thompson Sampling with Parameter h
A Formal Solution to the Grain of Truth Problem
AdaptEx: A Self-Service Contextual Bandit Platform
Contextual Thompson Sampling via Generation of Missing Data
Contextual Multi-Armed Bandits for Causal Marketing
A Simple and Optimal Policy Design with Safety against Heavy-Tailed Risk for Stochastic Bandits
Contextual Multi-armed Bandit Algorithm for Semiparametric Reward Model
Contextual Bandit with Herding Effects: Algorithms and Recommendation Applications
A sequential Monte Carlo approach to Thompson sampling for Bayesian optimization
A Federated Online Restless Bandit Framework for Cooperative Resource Allocation
Contextual Bandits with Non-Stationary Correlated Rewards for User Association in MmWave Vehicular Networks
Contextual Bandits for Advertising Budget Allocation
A resource-constrained stochastic scheduling algorithm for homeless street outreach and gleaning edible food
Adaptive Portfolio by Solving Multi-armed Bandit via Thompson Sampling
Context Attribution with Multi-Armed Bandit Optimization
A Reliability-aware Multi-armed Bandit Approach to Learn and Select Users in Demand Response
Adjusted Expected Improvement for Cumulative Regret Minimization in Noisy Bayesian Optimization
Active Search for High Recall: a Non-Stationary Extension of Thompson Sampling
Context Attentive Bandits: Contextual Bandit with Restricted Context
A relaxed technical assumption for posterior sampling-based reinforcement learning for control of unknown linear systems
Constrained Thompson Sampling for Wireless Link Optimization
A Reinforcement Learning based Reset Policy for CDCL SAT Solvers
Constrained Thompson Sampling for Real-Time Electricity Pricing with Grid Reliability Constraints
Constrained Contextual Bandit Learning for Adaptive Radar Waveform Selection
Efficiently Tackling Million-Dimensional Multiobjective Problems: A Direction Sampling and Fine-Tuning Approach
Connections Between Mirror Descent, Thompson Sampling and the Information Ratio
Connecting Thompson Sampling and UCB: Towards More Efficient Trade-offs Between Privacy and Regret
A Provably Efficient Model-Free Posterior Sampling Method for Episodic Reinforcement Learning
A Distributed Neural Linear Thompson Sampling Framework to Achieve URLLC in Industrial IoT
Active Reinforcement Learning with Monte-Carlo Tree Search
Accelerating Grasp Exploration by Leveraging Learned Priors
Concurrent Decentralized Channel Allocation and Access Point Selection using Multi-Armed Bandits in multi BSS WLANs
Combining Bayesian Optimization and Lipschitz Optimization
A Practical Method for Solving Contextual Bandit Problems Using Decision Trees
Combinatorial Neural Bandits
Combinatorial Multi-armed Bandit with Probabilistically Triggered Arms: A Case with Bounded Regret
Adaptive Experimentation in the Presence of Exogenous Nonstationary Variation
Combinatorial Multi-armed Bandits: Arm Selection via Group Testing
Bayesian Analysis of Combinatorial Gaussian Process Bandits
Approximate Thompson Sampling for Learning Linear Quadratic Regulators with O(T) Regret
Code Repair with LLMs gives an Exploration-Exploitation Tradeoff
Chimera: A Hybrid Machine Learning Driven Multi-Objective Design Space Exploration Tool for FPGA High-Level Synthesis
Approximate information for efficient exploration-exploitation strategies
Fast Change Identification in Multi-Play Bandits and its Applications in Wireless Networks
Challenges in Statistical Analysis of Data Collected by a Bandit Algorithm: An Empirical Exploration in Applications to Adaptively Randomized Experiments
Chained Information-Theoretic bounds and Tight Regret Rate for Linear Bandit Problems
Page 6 of 14

No leaderboard results yet.