
Multi-Armed Bandits

Multi-armed bandits refer to the task of allocating a fixed amount of resources among competing choices so as to maximize expected gain, when each choice's payoff is only partially known. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
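To make the exploration/exploitation trade-off concrete, here is a minimal epsilon-greedy sketch. It is not taken from any of the papers listed below; the number of arms, the Bernoulli reward model, and the epsilon value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 arms with unknown Bernoulli reward probabilities.
true_means = rng.uniform(0.1, 0.9, size=5)
epsilon = 0.1          # exploration rate (assumed value)
counts = np.zeros(5)   # pulls per arm
values = np.zeros(5)   # running mean reward per arm

for t in range(10_000):
    # Explore with probability epsilon, otherwise exploit the current best estimate.
    if rng.random() < epsilon:
        arm = int(rng.integers(5))
    else:
        arm = int(np.argmax(values))
    reward = float(rng.random() < true_means[arm])
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update

print("Estimated arm values:", np.round(values, 3))
print("Best arm found:", int(np.argmax(values)), "true best:", int(np.argmax(true_means)))
```

Exploration (random pulls) keeps refining the estimates of every arm, while exploitation (greedy pulls) concentrates resources on the arm that currently looks best.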

Papers

Showing 561–570 of 1262 papers

Title | Status | Hype
Provably and Practically Efficient Neural Contextual Bandits | | 0
Provable General Function Class Representation Learning in Multitask Bandits and MDPs | | 0
Online Meta-Learning in Adversarial Multi-Armed Bandits | | 0
Quantum Multi-Armed Bandits and Stochastic Linear Bandits Enjoy Logarithmic Regrets | | 0
Optimistic Whittle Index Policy: Online Learning for Restless Bandits | Code | 0
Federated Neural Bandits | Code | 0
Meta-Learning Adversarial Bandits | | 0
Lifting the Information Ratio: An Information-Theoretic Analysis of Thompson Sampling for Contextual Bandits | | 0
Fairness and Welfare Quantification for Regret in Multi-Armed Bandits | | 0
Exploration, Exploitation, and Engagement in Multi-Armed Bandits with Abandonment | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified
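Cumulative regret, the metric reported above, is the sum over rounds of the gap between the best arm's expected reward and the expected reward of the arm actually played. A minimal sketch of how it can be computed in simulation, assuming stochastic arms with known true means (the function name and example values are hypothetical, not part of this benchmark):

```python
import numpy as np

def cumulative_regret(true_means, chosen_arms):
    """Sum over rounds of (best arm's mean - played arm's mean).

    Assumes the true means are known, which holds only in simulation;
    real benchmarks estimate regret from repeated runs.
    """
    true_means = np.asarray(true_means, dtype=float)
    best = true_means.max()
    return float(np.sum(best - true_means[np.asarray(chosen_arms)]))

# Illustrative usage with made-up values: gaps are 0.6 + 0.3 + 0 + 0 + 0.
print(round(cumulative_regret([0.2, 0.5, 0.8], [0, 1, 2, 2, 2]), 3))  # 0.9
```

Lower cumulative regret is better: a policy that quickly identifies and then keeps playing the best arm accumulates regret slowly.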