
Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.
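The idea above can be sketched for the simplest case, a Bernoulli bandit with a Beta posterior per arm: sample one draw from each arm's posterior and pull the arm whose draw is largest. The arm probabilities, round count, and seed below are illustrative assumptions, not taken from any paper on this page.

```python
import random

# Hypothetical Bernoulli bandit: assumed true success probabilities per arm.
TRUE_PROBS = [0.3, 0.5, 0.7]

def thompson_sampling(true_probs, rounds=2000, seed=42):
    """Beta-Bernoulli Thompson sampling: maintain a Beta(alpha, beta)
    posterior per arm, draw one sample from each posterior, and act
    greedily with respect to that randomly drawn belief."""
    rng = random.Random(seed)
    n_arms = len(true_probs)
    alpha = [1] * n_arms  # 1 + observed successes (uniform Beta(1,1) prior)
    beta = [1] * n_arms   # 1 + observed failures
    pulls = [0] * n_arms
    for _ in range(rounds):
        # One posterior sample per arm: this is the "randomly drawn belief".
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(n_arms)]
        # Choose the action that maximizes reward under that belief.
        arm = max(range(n_arms), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = thompson_sampling(TRUE_PROBS)
print(pulls)  # the best arm tends to accumulate most of the pulls
```

Early on, wide posteriors make all arms plausible, so the sampling step explores; as evidence accumulates, the posteriors concentrate and the play becomes increasingly greedy. Exploration and exploitation are traded off by the posterior itself, with no explicit exploration schedule.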

Papers

Showing 281–290 of 655 papers

Title | Status | Hype
Connecting Thompson Sampling and UCB: Towards More Efficient Trade-offs Between Privacy and Regret | | 0
A Provably Efficient Model-Free Posterior Sampling Method for Episodic Reinforcement Learning | | 0
A Distributed Neural Linear Thompson Sampling Framework to Achieve URLLC in Industrial IoT | | 0
Active Reinforcement Learning with Monte-Carlo Tree Search | | 0
Accelerating Grasp Exploration by Leveraging Learned Priors | | 0
Concurrent Decentralized Channel Allocation and Access Point Selection using Multi-Armed Bandits in multi BSS WLANs | | 0
Combining Bayesian Optimization and Lipschitz Optimization | | 0
A Practical Method for Solving Contextual Bandit Problems Using Decision Trees | | 0
Combinatorial Neural Bandits | | 0
Combinatorial Multi-armed Bandit with Probabilistically Triggered Arms: A Case with Bounded Regret | | 0

No leaderboard results yet.