
Multi-Armed Bandits

Multi-armed bandit problems are tasks in which a fixed, limited set of resources must be allocated among competing choices so as to maximize expected gain. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
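
For intuition, here is a minimal sketch of the exploration/exploitation trade-off using an epsilon-greedy strategy. The arm reward probabilities, epsilon value, and round count are hypothetical and chosen purely for illustration; they are not taken from any paper or benchmark listed below.

```python
import random

# Hypothetical Bernoulli reward probabilities for three arms (illustrative only).
ARM_PROBS = [0.2, 0.5, 0.7]
EPSILON = 0.1        # probability of exploring a random arm
N_ROUNDS = 10_000

counts = [0] * len(ARM_PROBS)    # number of pulls per arm
values = [0.0] * len(ARM_PROBS)  # running mean reward per arm

for _ in range(N_ROUNDS):
    if random.random() < EPSILON:
        # Explore: pick an arm uniformly at random.
        arm = random.randrange(len(ARM_PROBS))
    else:
        # Exploit: pick the arm with the highest estimated mean reward.
        arm = max(range(len(ARM_PROBS)), key=lambda a: values[a])

    reward = 1.0 if random.random() < ARM_PROBS[arm] else 0.0
    counts[arm] += 1
    # Incremental update of the running mean for the chosen arm.
    values[arm] += (reward - values[arm]) / counts[arm]

print("Estimated arm values:", [round(v, 3) for v in values])
print("Pulls per arm:", counts)
```

With too little exploration the agent can lock onto a suboptimal arm early; with too much it wastes pulls on clearly inferior arms, which is the tension the papers below study under various settings (contextual, adversarial, streaming, etc.).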

Papers

Showing 361–370 of 1262 papers

Title | Status | Hype
A Convex Framework for Confounding Robust Inference | Code | 0
Task Selection and Assignment for Multi-modal Multi-task Dialogue Act Classification with Non-stationary Multi-armed Bandits | | 0
Wasserstein Distributionally Robust Policy Evaluation and Learning for Contextual Bandits | | 0
Doubly High-Dimensional Contextual Bandits: An Interpretable Model for Joint Assortment-Pricing | | 0
The Best Arm Evades: Near-optimal Multi-pass Streaming Lower Bounds for Pure Exploration in Multi-armed Bandits | | 0
Bypassing the Simulator: Near-Optimal Adversarial Linear Contextual Bandits | | 0
Concentrated Differential Privacy for Bandits | | 0
Pure Exploration under Mediators' Feedback | | 0
Stochastic Graph Bandit Learning with Side-Observations | | 0
Learning How to Price Charging in Electric Ride-Hailing Markets | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified