
Multi-Armed Bandits

The multi-armed bandit problem is a task in which a fixed, limited set of resources must be allocated among competing choices (the "arms") so as to maximize expected gain, when each arm's payoff is only partially known at the time of allocation. These problems typically involve an exploration/exploitation trade-off: gathering more information about uncertain arms versus repeatedly pulling the arm that currently looks best.
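
To make the trade-off concrete, here is a minimal sketch of epsilon-greedy, one of the simplest bandit strategies: with probability epsilon the agent explores a random arm, otherwise it exploits the arm with the highest estimated mean reward. The arm means, the epsilon value, and the Gaussian reward noise below are illustrative assumptions, not taken from any paper listed on this page.

```python
import random

def epsilon_greedy_bandit(true_means, n_rounds=10_000, epsilon=0.1, seed=0):
    """Pull one arm per round, balancing exploration (random arm)
    against exploitation (arm with the best estimated mean so far)."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # number of pulls per arm
    estimates = [0.0] * n_arms   # running mean reward per arm
    total_reward = 0.0

    for _ in range(n_rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                            # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])   # exploit
        reward = rng.gauss(true_means[arm], 1.0)  # noisy payoff (assumed Gaussian)
        counts[arm] += 1
        # incremental update of the running mean for the pulled arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return total_reward, estimates

# Example: three arms with hidden means; the agent must discover arm 2 is best.
reward, est = epsilon_greedy_bandit([0.2, 0.5, 0.9])
print(f"total reward: {reward:.1f}, estimated means: {[round(e, 2) for e in est]}")
```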

(Image credit: Microsoft Research)

Papers

Showing 21–30 of 1262 papers

| Title | Status | Hype |
| --- | --- | --- |
| Offline Neural Contextual Bandits: Pessimism, Optimization and Generalization | Code | 1 |
| EE-Net: Exploitation-Exploration Neural Networks in Contextual Bandits | Code | 1 |
| Generalized Linear Bandits with Local Differential Privacy | Code | 1 |
| Off-Policy Evaluation via Adaptive Weighting with Data from Contextual Bandits | Code | 1 |
| Deep Bandits Show-Off: Simple and Efficient Exploration with Deep Networks | Code | 1 |
| Federated Multi-Armed Bandits | Code | 1 |
| An empirical evaluation of active inference in multi-armed bandits | Code | 1 |
| BanditPAM: Almost Linear Time k-Medoids Clustering via Multi-Armed Bandits | Code | 1 |
| Neural Thompson Sampling | Code | 1 |
| Carousel Personalization in Music Streaming Apps with Contextual Bandits | Code | 1 |
Page 3 of 127

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified |
| 2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified |
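
The benchmark above ranks models by cumulative regret: the gap between the reward an oracle that always pulls the best arm would have collected and the reward the agent actually collected. A minimal sketch of that computation, assuming access to the true arm means and a logged sequence of pulled arms (the values below are hypothetical, not the benchmark's data):

```python
def cumulative_regret(true_means, pulled_arms):
    """Sum over rounds of (best mean reward - mean reward of the pulled arm)."""
    best = max(true_means)
    return sum(best - true_means[arm] for arm in pulled_arms)

# Hypothetical log: the agent pulled suboptimal arm 0 twice before settling on arm 2.
print(cumulative_regret([0.2, 0.5, 0.9], [0, 0, 2, 2, 2]))  # 1.4
```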