
Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
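The idea above can be sketched for the simplest case, a Bernoulli multi-armed bandit with Beta posteriors: each round, draw one sample from each arm's posterior and pull the arm whose sampled belief is highest. This is an illustrative sketch, not code from any of the papers below; the function name and arm probabilities are invented for the example.

```python
import random

def thompson_sampling(true_probs, n_rounds, seed=0):
    """Beta-Bernoulli Thompson sampling on a Bernoulli bandit.

    Each arm keeps a Beta(successes + 1, failures + 1) posterior over its
    unknown reward probability. Every round we draw one sample per arm
    from these posteriors (a "randomly drawn belief") and pull the arm
    whose sample is largest, then update that arm's counts.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    successes = [0] * n_arms
    failures = [0] * n_arms
    total_reward = 0
    for _ in range(n_rounds):
        # Sample a belief about each arm's reward probability.
        samples = [rng.betavariate(successes[i] + 1, failures[i] + 1)
                   for i in range(n_arms)]
        # Act greedily with respect to the sampled beliefs.
        arm = max(range(n_arms), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return total_reward, successes, failures
```

Because arms with little data have wide posteriors, they are occasionally sampled as best and get explored; as evidence accumulates, the posteriors concentrate and play shifts toward the truly best arm, balancing exploration against exploitation without an explicit schedule.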

Papers

Showing 276–300 of 655 papers

Title | Status | Hype
A Reinforcement Learning based Reset Policy for CDCL SAT Solvers | | 0
Constrained Thompson Sampling for Real-Time Electricity Pricing with Grid Reliability Constraints | | 0
Constrained Contextual Bandit Learning for Adaptive Radar Waveform Selection | | 0
Efficiently Tackling Million-Dimensional Multiobjective Problems: A Direction Sampling and Fine-Tuning Approach | | 0
Connections Between Mirror Descent, Thompson Sampling and the Information Ratio | | 0
Connecting Thompson Sampling and UCB: Towards More Efficient Trade-offs Between Privacy and Regret | | 0
A Provably Efficient Model-Free Posterior Sampling Method for Episodic Reinforcement Learning | | 0
A Distributed Neural Linear Thompson Sampling Framework to Achieve URLLC in Industrial IoT | | 0
Active Reinforcement Learning with Monte-Carlo Tree Search | | 0
Accelerating Grasp Exploration by Leveraging Learned Priors | | 0
Concurrent Decentralized Channel Allocation and Access Point Selection using Multi-Armed Bandits in multi BSS WLANs | | 0
Combining Bayesian Optimization and Lipschitz Optimization | | 0
A Practical Method for Solving Contextual Bandit Problems Using Decision Trees | | 0
Combinatorial Neural Bandits | | 0
Combinatorial Multi-armed Bandit with Probabilistically Triggered Arms: A Case with Bounded Regret | | 0
Adaptive Experimentation in the Presence of Exogenous Nonstationary Variation | | 0
Combinatorial Multi-armed Bandits: Arm Selection via Group Testing | | 0
Bayesian Analysis of Combinatorial Gaussian Process Bandits | | 0
Approximate Thompson Sampling for Learning Linear Quadratic Regulators with O(T) Regret | | 0
Code Repair with LLMs gives an Exploration-Exploitation Tradeoff | | 0
Chimera: A Hybrid Machine Learning Driven Multi-Objective Design Space Exploration Tool for FPGA High-Level Synthesis | | 0
Approximate information for efficient exploration-exploitation strategies | | 0
Fast Change Identification in Multi-Play Bandits and its Applications in Wireless Networks | | 0
Challenges in Statistical Analysis of Data Collected by a Bandit Algorithm: An Empirical Exploration in Applications to Adaptively Randomized Experiments | | 0
Chained Information-Theoretic bounds and Tight Regret Rate for Linear Bandit Problems | | 0
Page 12 of 27

No leaderboard results yet.