SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
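This idea can be sketched for the classic Bernoulli bandit: each arm keeps a Beta posterior over its reward probability, and every round the algorithm samples one draw per posterior and pulls the arm whose draw is largest. A minimal illustration (the arm probabilities, round count, and function name below are illustrative, not taken from this page):

```python
import random

def thompson_sampling(true_probs, n_rounds=2000, seed=0):
    """Beta-Bernoulli Thompson sampling for a multi-armed bandit.

    Each arm keeps a Beta(successes + 1, failures + 1) posterior over
    its unknown reward probability (a uniform Beta(1, 1) prior).
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    successes = [0] * n_arms
    failures = [0] * n_arms
    pulls = [0] * n_arms
    for _ in range(n_rounds):
        # Draw one sample from each arm's posterior belief ...
        samples = [rng.betavariate(successes[a] + 1, failures[a] + 1)
                   for a in range(n_arms)]
        # ... then act greedily with respect to that randomly drawn belief.
        arm = max(range(n_arms), key=samples.__getitem__)
        reward = rng.random() < true_probs[arm]  # simulated Bernoulli reward
        pulls[arm] += 1
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return pulls

# Over time the arm with the highest true probability is pulled most,
# while uncertain arms still get occasional exploratory pulls.
pulls = thompson_sampling([0.3, 0.5, 0.7])
```

Because actions are chosen by sampling from the posterior rather than by adding an explicit exploration bonus, exploration falls out automatically: arms with wide posteriors occasionally produce the largest draw.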

Papers

Showing 101–150 of 655 papers

Title | Status | Hype
DRL-based Joint Resource Scheduling of eMBB and URLLC in O-RAN | | 0
Bayesian Bandit Algorithms with Approximate Inference in Stochastic Linear Bandits | | 0
Preferential Multi-Objective Bayesian Optimization | | 0
Joint User Association and Pairing in Multi-UAV-Assisted NOMA Networks: A Decaying-Epsilon Thompson Sampling Framework | | 0
Memory Sequence Length of Data Sampling Impacts the Adaptation of Meta-Reinforcement Learning Agents | | 0
More Efficient Randomized Exploration for Reinforcement Learning via Approximate Sampling | Code | 0
Improving Reward-Conditioned Policies for Multi-Armed Bandits using Normalized Weight Functions | | 0
Graph Neural Thompson Sampling | | 0
A Federated Online Restless Bandit Framework for Cooperative Resource Allocation | | 0
DISCO: An End-to-End Bandit Framework for Personalised Discount Allocation | | 0
Two-Stage Resource Allocation in Reconfigurable Intelligent Surface Assisted Hybrid Networks via Multi-Player Bandits | | 0
Adaptively Learning to Select-Rank in Online Platforms | | 0
Speculative Decoding via Early-exiting for Faster LLM Inference with Thompson Sampling Control Mechanism | | 0
Approximate Thompson Sampling for Learning Linear Quadratic Regulators with O(T) Regret | | 0
Posterior Sampling via Autoregressive Generation | | 0
Cost-efficient Knowledge-based Question Answering with Large Language Models | | 0
On Bits and Bandits: Quantifying the Regret-Information Trade-off | Code | 0
Code Repair with LLMs gives an Exploration-Exploitation Tradeoff | | 0
Indexed Minimum Empirical Divergence-Based Algorithms for Linear Bandits | | 0
No Algorithmic Collusion in Two-Player Blindfolded Game with Thompson Sampling | | 0
Understanding the Training and Generalization of Pretrained Transformer for Sequential Decision Making | | 0
Smart Routing with Precise Link Estimation: DSEE-Based Anypath Routing for Reliable Wireless Networking | | 0
Analyzing and Enhancing Queue Sampling for Energy-Efficient Remote Control of Bandits | | 0
Thompson Sampling for Infinite-Horizon Discounted Decision Processes | | 0
Constructing Adversarial Examples for Vertical Federated Learning: Optimal Client Corruption through Multi-Armed Bandit | Code | 0
Efficient and Adaptive Posterior Sampling Algorithms for Bandits | | 0
Bayesian Optimization with LLM-Based Acquisition Functions for Natural Language Preference Elicitation | | 0
Bayesian-Guided Generation of Synthetic Microbiomes with Minimized Pathogenicity | | 0
Randomized Exploration in Cooperative Multi-Agent Reinforcement Learning | | 0
Online Learning of Decision Trees with Thompson Sampling | Code | 0
Feel-Good Thompson Sampling for Contextual Dueling Bandits | | 0
A Reinforcement Learning based Reset Policy for CDCL SAT Solvers | | 0
On the Importance of Uncertainty in Decision-Making with Large Language Models | | 0
Meta Learning in Bandits within Shared Affine Subspaces | | 0
A resource-constrained stochastic scheduling algorithm for homeless street outreach and gleaning edible food | | 0
ε-Neural Thompson Sampling of Deep Brain Stimulation for Parkinson Disease Treatment | | 0
Cramming Contextual Bandits for On-policy Statistical Evaluation | | 0
TS-RSR: A provably efficient approach for batch Bayesian Optimization | | 0
Chained Information-Theoretic bounds and Tight Regret Rate for Linear Bandit Problems | | 0
Epsilon-Greedy Thompson Sampling to Bayesian Optimization | | 0
Influencing Bandits: Arm Selection for Preference Shaping | | 0
Towards Efficient and Optimal Covariance-Adaptive Algorithms for Combinatorial Semi-Bandits | | 0
Optimizing Adaptive Experiments: A Unified Approach to Regret Minimization and Best-Arm Identification | | 0
Thompson Sampling in Partially Observable Contextual Bandits | | 0
Diffusion Models Meet Contextual Bandits with Large Action Spaces | | 0
Tree Ensembles for Contextual Bandits | | 0
Context in Public Health for Underserved Communities: A Bayesian Approach to Online Restless Bandits | | 0
Optimistic Thompson Sampling for No-Regret Learning in Unknown Games | | 0
Efficient Exploration for LLMs | | 0
Accelerating Approximate Thompson Sampling with Underdamped Langevin Monte Carlo | Code | 0
Page 3 of 14

No leaderboard results yet.