
Multi-Armed Bandits

Multi-armed bandits refer to the problem of allocating a fixed, limited set of resources among competing choices so as to maximize expected gain, when the payoff of each choice is only partially known and is learned as it is selected. These problems typically involve an exploration/exploitation trade-off; a minimal sketch is given below.

(Image credit: Microsoft Research)
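As one illustration of the exploration/exploitation trade-off, here is a minimal epsilon-greedy player for a stochastic Bernoulli bandit. The arm payout probabilities, horizon, and epsilon value are assumptions chosen for this sketch, not values reported on this page: with probability epsilon the player tries a random arm (exploration), otherwise it pulls the arm with the highest running mean reward (exploitation).

```python
import numpy as np

def epsilon_greedy_bandit(true_means, n_rounds=10_000, epsilon=0.1, seed=0):
    """Illustrative epsilon-greedy player for a stochastic multi-armed bandit.

    `true_means` (per-arm Bernoulli reward probabilities), `n_rounds`, and
    `epsilon` are assumptions made for this sketch.
    """
    rng = np.random.default_rng(seed)
    n_arms = len(true_means)
    counts = np.zeros(n_arms)   # number of pulls per arm
    values = np.zeros(n_arms)   # running mean reward per arm
    total_reward = 0.0

    for _ in range(n_rounds):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if rng.random() < epsilon:
            arm = int(rng.integers(n_arms))
        else:
            arm = int(np.argmax(values))
        reward = float(rng.random() < true_means[arm])       # Bernoulli reward
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean
        total_reward += reward

    return values, total_reward

# Example: three arms with unknown payout probabilities.
estimates, total = epsilon_greedy_bandit([0.2, 0.5, 0.7])
print(estimates, total)
```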

Papers

Showing 861-870 of 1262 papers

Title | Status | Hype
Multi-Armed Bandits with Local Differential Privacy | - | 0
Linear Bandits with Limited Adaptivity and Learning Distributional Optimal Design | - | 0
Continuous-Time Multi-Armed Bandits with Controlled Restarts | - | 0
Offline Contextual Bandits with Overparameterized Models | Code | 0
Online learning with Corrupted context: Corrupted Contextual Bandits | - | 0
Approximating a Target Distribution using Weight Queries | Code | 0
Adaptive Discretization against an Adversary: Lipschitz bandits, Dynamic Pricing, and Auction Tuning | - | 0
Towards Tractable Optimism in Model-Based Reinforcement Learning | - | 0
Open Problem: Model Selection for Contextual Bandits | - | 0
Learning by Repetition: Stochastic Multi-armed Bandits under Priming Effect | - | 0
Page 87 of 127

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | - | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | - | Unverified
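The metric in this table is cumulative regret: the expected reward lost by not always playing the best arm, summed over rounds. The helper below sketches that definition with made-up inputs; it is not how the claimed values above were computed.

```python
import numpy as np

def cumulative_regret(chosen_means, best_mean):
    """Cumulative (pseudo-)regret: the per-round gap between the optimal arm's
    expected reward and the expected reward of the arm actually chosen,
    summed over rounds.

    `chosen_means` and `best_mean` are illustrative inputs, unrelated to the
    leaderboard values above.
    """
    chosen_means = np.asarray(chosen_means, dtype=float)
    return float(np.sum(best_mean - chosen_means))

# Example: the learner mostly picks the 0.7 arm but occasionally explores.
print(cumulative_regret([0.5, 0.7, 0.7, 0.2, 0.7], best_mean=0.7))  # 0.7
```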