SOTAVerified

Multi-Armed Bandits

Multi-armed bandits refer to the task of allocating a fixed, limited set of resources among competing choices (arms) so as to maximize expected gain, when each choice's reward distribution is only partially known at allocation time. These problems typically involve an exploration/exploitation trade-off: gathering more information about uncertain arms versus repeatedly playing the arm that currently looks best.

(Image credit: Microsoft Research)
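
The exploration/exploitation trade-off above is easiest to see in code. Below is a minimal epsilon-greedy sketch in Python; it is illustrative only, and the arm reward distributions and the epsilon value are assumptions, not taken from any paper listed here.

```python
import random

def epsilon_greedy(arms, pulls=10_000, epsilon=0.1):
    """Play a list of `arms` (callables returning a reward), balancing
    exploration (a uniformly random arm, with probability epsilon)
    against exploitation (the arm with the best empirical mean so far)."""
    counts = [0] * len(arms)    # pulls per arm
    totals = [0.0] * len(arms)  # summed reward per arm
    for _ in range(pulls):
        if random.random() < epsilon:
            i = random.randrange(len(arms))  # explore
        else:
            # Untried arms get mean +inf, so each arm is sampled at least once.
            means = [t / c if c else float("inf")
                     for t, c in zip(totals, counts)]
            i = max(range(len(arms)), key=means.__getitem__)  # exploit
        reward = arms[i]()
        counts[i] += 1
        totals[i] += reward
    return totals, counts

# Hypothetical Bernoulli arms with success probabilities 0.2, 0.5, 0.8.
arms = [lambda p=p: 1.0 if random.random() < p else 0.0
        for p in (0.2, 0.5, 0.8)]
totals, counts = epsilon_greedy(arms)
print(counts)  # the 0.8 arm should dominate the pull counts
```

With epsilon = 0.1 the learner spends roughly 10% of pulls exploring at random and the rest exploiting, so the empirically best arm accumulates most of the plays.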

Papers

Showing 451–460 of 1262 papers

Title | Status | Hype
Bandit Social Learning: Exploration under Myopic Behavior | – | 0
Adversarial Rewards in Universal Learning for Contextual Bandits | – | 0
Piecewise-Stationary Multi-Objective Multi-Armed Bandit with Application to Joint Communications and Sensing | Code | 0
Leveraging User-Triggered Supervision in Contextual Bandits | – | 0
On Private and Robust Bandits | – | 0
Multiplier Bootstrap-based Exploration | – | 0
Stochastic Contextual Bandits with Long Horizon Rewards | – | 0
Randomized Greedy Learning for Non-monotone Stochastic Submodular Maximization Under Full-bandit Feedback | – | 0
Improved Algorithms for Multi-period Multi-class Packing Problems with Bandit Feedback | – | 0
Quantum contextual bandits and recommender systems for quantum data | – | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | – | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | – | Unverified
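
The benchmark metric above, cumulative regret, is the total expected reward lost by not always playing the best arm; lower is better. Below is a minimal sketch of the standard pseudo-regret computation, assuming the true arm means are known, as they would be in a simulation; the numbers are hypothetical.

```python
def cumulative_regret(best_mean, chosen_means):
    """Sum over rounds of (optimal arm's mean reward - chosen arm's mean reward)."""
    return sum(best_mean - m for m in chosen_means)

# Hypothetical run: the best arm pays 0.8 on average, and the learner
# played arms with means 0.8, 0.3, 0.8 over three rounds.
print(cumulative_regret(0.8, [0.8, 0.3, 0.8]))  # 0.5
```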