SOTAVerified

Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. At each step, it draws a belief at random from the posterior distribution over reward models and chooses the action that maximizes expected reward under that sampled belief.
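The procedure above can be sketched for the Bernoulli bandit, the standard textbook setting: each arm's success probability gets a Beta posterior, one sample is drawn per arm per round, and the arm with the highest sampled value is played. This is a minimal illustration, not production code; the function name and parameters are chosen here for the example.

```python
import random

def thompson_sampling(true_probs, n_rounds=10000, seed=0):
    """Bernoulli Thompson sampling with a Beta(1, 1) prior on each arm."""
    rng = random.Random(seed)
    n_arms = len(true_probs)
    # Beta posterior parameters per arm; alpha = successes + 1, beta = failures + 1.
    alpha = [1] * n_arms
    beta = [1] * n_arms
    total_reward = 0
    for _ in range(n_rounds):
        # Draw one sample from each arm's posterior belief...
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(n_arms)]
        # ...and play the arm whose sampled mean reward is largest.
        arm = max(range(n_arms), key=lambda i: samples[i])
        # Observe a Bernoulli reward and update that arm's posterior.
        reward = 1 if rng.random() < true_probs[arm] else 0
        total_reward += reward
        if reward:
            alpha[arm] += 1
        else:
            beta[arm] += 1
    return total_reward, alpha, beta
```

Because arms with uncertain posteriors occasionally produce large samples, the algorithm keeps exploring early on, then concentrates its pulls on the empirically best arm as the posteriors sharpen.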

Papers

Showing 151–175 of 655 papers

Chimera: A Hybrid Machine Learning Driven Multi-Objective Design Space Exploration Tool for FPGA High-Level Synthesis
Code Repair with LLMs gives an Exploration-Exploitation Tradeoff
Bayesian Analysis of Combinatorial Gaussian Process Bandits
Combinatorial Multi-armed Bandits: Arm Selection via Group Testing
Bayesian Quantile and Expectile Optimisation
Combinatorial Neural Bandits
Combining Bayesian Optimization and Lipschitz Optimization
Concurrent Decentralized Channel Allocation and Access Point Selection using Multi-Armed Bandits in multi BSS WLANs
Connecting Thompson Sampling and UCB: Towards More Efficient Trade-offs Between Privacy and Regret
Connections Between Mirror Descent, Thompson Sampling and the Information Ratio
Constrained Contextual Bandit Learning for Adaptive Radar Waveform Selection
Constrained Thompson Sampling for Real-Time Electricity Pricing with Grid Reliability Constraints
Constrained Thompson Sampling for Wireless Link Optimization
A Reinforcement Learning based Reset Policy for CDCL SAT Solvers
A relaxed technical assumption for posterior sampling-based reinforcement learning for control of unknown linear systems
Context Attentive Bandits: Contextual Bandit with Restricted Context
Context Attribution with Multi-Armed Bandit Optimization
Adaptive Portfolio by Solving Multi-armed Bandit via Thompson Sampling
Contextual Bandits for Advertising Budget Allocation
Contextual Bandits with Non-Stationary Correlated Rewards for User Association in MmWave Vehicular Networks
Contextual Bandit with Herding Effects: Algorithms and Recommendation Applications
Contextual Multi-armed Bandit Algorithm for Semiparametric Reward Model
Contextual Multi-Armed Bandits for Causal Marketing
Contextual Thompson Sampling via Generation of Missing Data
An Information-Theoretic Analysis of Thompson Sampling for Logistic Bandits
Page 7 of 27
