SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
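The procedure above can be sketched for the simplest case, a Bernoulli multi-armed bandit with Beta posteriors. This is a minimal illustrative implementation, not drawn from any of the papers listed below; the arm probabilities and parameter names are assumptions for the example.

```python
import random

def thompson_sampling(true_probs, n_rounds=5000, seed=0):
    """Thompson sampling for a Bernoulli multi-armed bandit.

    Each arm keeps a Beta(successes + 1, failures + 1) posterior over its
    reward probability. Each round, one belief is drawn from every posterior
    and the arm whose drawn belief is largest is pulled -- i.e. the action
    that maximizes expected reward under a randomly drawn belief.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    successes = [0] * n_arms
    failures = [0] * n_arms
    total_reward = 0
    for _ in range(n_rounds):
        # Sample one reward-probability belief per arm from its Beta posterior.
        samples = [rng.betavariate(successes[i] + 1, failures[i] + 1)
                   for i in range(n_arms)]
        arm = max(range(n_arms), key=lambda i: samples[i])
        # Simulate a Bernoulli reward and update that arm's posterior.
        reward = 1 if rng.random() < true_probs[arm] else 0
        total_reward += reward
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures, total_reward
```

With illustrative arm probabilities such as `[0.2, 0.5, 0.8]`, the posterior for the best arm concentrates and it receives the large majority of pulls, while the other arms are still explored occasionally.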

Papers

Showing 101–110 of 655 papers

Title | Status | Hype
DRL-based Joint Resource Scheduling of eMBB and URLLC in O-RAN | | 0
Preferential Multi-Objective Bayesian Optimization | | 0
Bayesian Bandit Algorithms with Approximate Inference in Stochastic Linear Bandits | | 0
Joint User Association and Pairing in Multi-UAV-Assisted NOMA Networks: A Decaying-Epsilon Thompson Sampling Framework | | 0
Memory Sequence Length of Data Sampling Impacts the Adaptation of Meta-Reinforcement Learning Agents | | 0
More Efficient Randomized Exploration for Reinforcement Learning via Approximate Sampling | Code | 0
Improving Reward-Conditioned Policies for Multi-Armed Bandits using Normalized Weight Functions | | 0
Graph Neural Thompson Sampling | | 0
A Federated Online Restless Bandit Framework for Cooperative Resource Allocation | | 0
DISCO: An End-to-End Bandit Framework for Personalised Discount Allocation | | 0
Page 11 of 66

No leaderboard results yet.