
Multi-Armed Bandits

Multi-armed bandits refer to the task of allocating a fixed amount of resources among competing choices (arms) so as to maximize expected gain. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
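To make the exploration/exploitation trade-off concrete, here is a minimal sketch of an epsilon-greedy policy on a Bernoulli bandit. The arm probabilities, epsilon value, and function name are illustrative choices, not something specified on this page.

```python
# Minimal epsilon-greedy sketch, assuming Bernoulli reward arms.
# Arm probabilities and hyperparameters are illustrative only.
import random

def epsilon_greedy(true_probs, n_rounds=1000, epsilon=0.1, seed=0):
    """Allocate pulls across arms, trading off exploration and exploitation."""
    rng = random.Random(seed)
    n_arms = len(true_probs)
    counts = [0] * n_arms          # pulls per arm
    values = [0.0] * n_arms        # running mean reward per arm
    total_reward = 0.0

    for _ in range(n_rounds):
        if rng.random() < epsilon:                       # explore: random arm
            arm = rng.randrange(n_arms)
        else:                                            # exploit: best estimate so far
            arm = max(range(n_arms), key=lambda a: values[a])
        reward = 1.0 if rng.random() < true_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update
        total_reward += reward
    return total_reward, values

if __name__ == "__main__":
    reward, estimates = epsilon_greedy([0.2, 0.5, 0.7])
    print(f"total reward: {reward:.0f}, estimated means: {[round(v, 2) for v in estimates]}")
```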

Papers

Showing 941-950 of 1262 papers

Title | Status | Hype
Streaming Algorithms for Stochastic Multi-armed Bandits | - | 0
Structured Linear Contextual Bandits: A Sharp and Geometric Smoothed Analysis | - | 0
Structured Reinforcement Learning for Delay-Optimal Data Transmission in Dense mmWave Networks | - | 0
Structure Matters: Dynamic Policy Gradient | - | 0
Sublinear Optimal Policy Value Estimation in Contextual Bandits | - | 0
Surrogate Objectives for Batch Policy Optimization in One-step Decision Making | - | 0
Survey Bandits with Regret Guarantees | - | 0
Taking a hint: How to leverage loss predictors in contextual bandits? | - | 0
Target Tracking for Contextual Bandits: Application to Demand Side Management | - | 0
Task Selection and Assignment for Multi-modal Multi-task Dialogue Act Classification with Non-stationary Multi-armed Bandits | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | - | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | - | Unverified
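The metric in the table above, cumulative regret, is typically the accumulated gap between the expected reward of always playing the best arm and the expected reward of the arms actually chosen. A small illustrative computation (the arm means and pull sequence are made up, not tied to the listed models) is sketched below.

```python
# Illustrative cumulative-regret computation for a Bernoulli bandit run.
# The arm means and chosen arms below are hypothetical examples.
best_mean = 0.7                     # mean reward of the optimal arm
arm_means = [0.2, 0.5, 0.7]         # true mean reward of each arm
chosen_arms = [0, 1, 2, 2, 1, 2]    # arms pulled over six rounds

# Expected regret accumulates the per-round gap to the optimal arm.
cumulative_regret = sum(best_mean - arm_means[a] for a in chosen_arms)
print(cumulative_regret)  # 0.9
```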