
Multi-Armed Bandits

Multi-armed bandits refer to a class of problems in which a fixed amount of resources must be allocated among competing choices in a way that maximizes expected gain, when each choice's payoff is only partially known. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
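
A concrete policy makes the exploration/exploitation trade-off easier to see. Below is a minimal epsilon-greedy sketch in Python; the function name, the Bernoulli arm probabilities, and the epsilon value are illustrative assumptions for this example, not taken from any paper or benchmark listed on this page.

```python
import random

def epsilon_greedy_bandit(arm_probs, n_rounds=1000, epsilon=0.1, seed=0):
    """Illustrative epsilon-greedy policy on Bernoulli arms (assumed setup).

    With probability epsilon the agent explores a uniformly random arm;
    otherwise it exploits the arm with the highest empirical mean reward.
    """
    rng = random.Random(seed)
    n_arms = len(arm_probs)
    counts = [0] * n_arms      # number of pulls per arm
    values = [0.0] * n_arms    # empirical mean reward per arm
    total_reward = 0.0

    for _ in range(n_rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                        # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        # Bernoulli reward drawn from the (hidden) success probability.
        reward = 1.0 if rng.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]    # incremental mean
        total_reward += reward

    return total_reward, values

# Example usage with three arms of unknown (to the agent) success probabilities.
reward, estimates = epsilon_greedy_bandit([0.2, 0.5, 0.7])
print(reward, estimates)
```

Raising epsilon spends more rounds on exploration, which improves the estimates of each arm's mean at the cost of pulling suboptimal arms more often; this is the trade-off the task description refers to.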

Papers

Showing 531-540 of 1262 papers

Title | Status | Hype
Raising Student Completion Rates with Adaptive Curriculum and Contextual Bandits | | 0
Towards Soft Fairness in Restless Multi-Armed Bandits | | 0
SPRT-based Efficient Best Arm Identification in Stochastic Bandits | | 0
Online Learning with Off-Policy Feedback | | 0
Parallel Best Arm Identification in Heterogeneous Environments | | 0
Contextual Bandits with Smooth Regret: Efficient Learning in Continuous Action Spaces | Code | 0
Contextual Bandits with Large Action Spaces: Made Practical | Code | 0
Online SuBmodular + SuPermodular (BP) Maximization with Bandit Feedback | Code | 0
Model Selection in Reinforcement Learning with General Function Approximations | | 0
Instance-optimal PAC Algorithms for Contextual Bandits | | 0
Page 54 of 127

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified