
Multi-Armed Bandits

Multi-armed bandits refer to a class of sequential decision problems in which a fixed amount of resources must be allocated among competing choices (arms) in a way that maximizes expected gain, even though each choice's payoff is only partially known and becomes better understood as resources are allocated to it. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
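Because each pull of an arm reveals only that arm's reward, even a simple strategy has to balance sampling under-explored arms against replaying the current best one. Below is a minimal sketch of one classic approach, epsilon-greedy on Bernoulli arms; the `EpsilonGreedy` class and the reward probabilities are illustrative assumptions, not taken from any paper listed on this page.

```python
import random

class EpsilonGreedy:
    """Minimal epsilon-greedy bandit: explore a random arm with
    probability epsilon, otherwise exploit the best empirical mean."""

    def __init__(self, n_arms: int, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms    # pulls per arm
        self.values = [0.0] * n_arms  # running mean reward per arm

    def select_arm(self) -> int:
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))  # explore
        # exploit: arm with the highest empirical mean so far
        return max(range(len(self.values)), key=self.values.__getitem__)

    def update(self, arm: int, reward: float) -> None:
        self.counts[arm] += 1
        n = self.counts[arm]
        # incremental update of the running mean
        self.values[arm] += (reward - self.values[arm]) / n

# Toy usage: three Bernoulli arms with hidden success probabilities.
probs = [0.2, 0.5, 0.7]
agent = EpsilonGreedy(n_arms=3, epsilon=0.1)
for _ in range(1000):
    arm = agent.select_arm()
    reward = 1.0 if random.random() < probs[arm] else 0.0
    agent.update(arm, reward)
print(agent.values)  # empirical means should approach probs
```

With epsilon = 0.1, the agent spends roughly 10% of pulls exploring uniformly at random and the rest exploiting; many of the papers listed below study more refined strategies (e.g. Thompson sampling) that adapt this balance over time.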

Papers

Showing 581–590 of 1262 papers

| Title | Status | Hype |
|---|---|---|
| Slowly Changing Adversarial Bandit Algorithms are Efficient for Discounted MDPs | | 0 |
| Semi-Parametric Contextual Bandits with Graph-Laplacian Regularization | | 0 |
| From Dirichlet to Rubin: Optimistic Exploration in RL without Bonuses | | 0 |
| Nearly Optimal Algorithms for Linear Contextual Bandits with Adversarial Corruptions | | 0 |
| A Survey of Risk-Aware Multi-Armed Bandits | | 0 |
| Federated Multi-Armed Bandits Under Byzantine Attacks | | 0 |
| Selectively Contextual Bandits | | 0 |
| Multi-Player Multi-Armed Bandits with Finite Shareable Resources Arms: Learning Algorithms & Applications | | 0 |
| Thompson Sampling for Bandit Learning in Matching Markets | Code | 0 |
| Evolutionary Multi-Armed Bandits with Genetic Thompson Sampling | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified |
| 2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified |