
Multi-Armed Bandits

Multi-armed bandits refer to the task of allocating a fixed, limited set of resources among competing choices (arms) so as to maximize expected gain, when each arm's payoff is only partially known at the time of allocation. These problems typically involve an exploration/exploitation trade-off: the learner must balance trying arms to learn their payoffs against repeatedly playing the arm that currently looks best.
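
As a minimal illustration of the exploration/exploitation trade-off, the sketch below runs an ε-greedy policy on a Bernoulli bandit. The arm probabilities, ε value, and horizon are illustrative assumptions, not taken from any paper listed on this page.

```python
import random

def epsilon_greedy_bandit(arm_probs, n_rounds=1000, epsilon=0.1, seed=0):
    """Play a Bernoulli multi-armed bandit with an epsilon-greedy policy."""
    rng = random.Random(seed)
    n_arms = len(arm_probs)
    counts = [0] * n_arms      # number of pulls per arm
    values = [0.0] * n_arms    # running mean reward per arm
    total_reward = 0.0

    for _ in range(n_rounds):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)
        else:
            arm = max(range(n_arms), key=lambda a: values[a])

        # Bernoulli reward drawn from the arm's (unknown to the agent) probability.
        reward = 1.0 if rng.random() < arm_probs[arm] else 0.0

        # Incremental update of the arm's estimated value.
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        total_reward += reward

    return values, total_reward

if __name__ == "__main__":
    estimates, reward = epsilon_greedy_bandit([0.2, 0.5, 0.7])
    print("estimated arm values:", estimates)
    print("total reward:", reward)
```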

(Image credit: Microsoft Research)

Papers

Showing 81–90 of 1262 papers

Title | Status | Hype
(Almost) Free Incentivized Exploration from Decentralized Learning Agents | Code | 0
Confidence Intervals for Policy Evaluation in Adaptive Experiments | Code | 0
Adaptive Estimator Selection for Off-Policy Evaluation | Code | 0
Cascading Bandits for Large-Scale Recommendation Problems | Code | 0
Adaptive Experimentation with Delayed Binary Feedback | Code | 0
Contextual bandits with entropy-based human feedback | Code | 0
A Convex Framework for Confounding Robust Inference | Code | 0
Corralling a Band of Bandit Algorithms | Code | 0
Scalable Exploration via Ensemble++ | Code | 0
Causal Contextual Bandits with Adaptive Context | Code | 0
Page 9 of 127

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | – | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | – | Unverified
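
The metric reported above is cumulative regret: the gap between the reward an oracle would collect by always playing the best arm and the expected reward of the arms actually pulled, summed over rounds. A minimal sketch of how it can be computed, assuming the evaluator knows the true arm means (the example means and pull sequence are hypothetical):

```python
def cumulative_regret(true_means, arms_pulled):
    """Cumulative regret after each round: shortfall versus always
    playing the arm with the highest true mean."""
    best = max(true_means)
    regret, total = [], 0.0
    for arm in arms_pulled:
        total += best - true_means[arm]
        regret.append(total)
    return regret

# Example: best arm has mean 0.7; each pull of arm 0 (mean 0.2) adds 0.5 regret.
print(cumulative_regret([0.2, 0.5, 0.7], [0, 2, 2, 1, 2]))
```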