SOTAVerified

Multi-Armed Bandits

Multi-armed bandits refer to a class of problems in which a fixed amount of resources must be allocated among competing choices so as to maximize expected gain. These problems typically involve an exploration/exploitation trade-off: the agent must balance trying under-sampled options against repeatedly playing the option that currently looks best.
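The exploration/exploitation trade-off can be illustrated with a minimal epsilon-greedy simulation on a Bernoulli bandit. This is an illustrative sketch only, not the method of any paper listed below; the arm means, epsilon value, and function name are arbitrary choices for the example.

```python
import random

def epsilon_greedy_bandit(true_means, n_rounds=10000, epsilon=0.1, seed=0):
    """Simulate an epsilon-greedy agent on a Bernoulli multi-armed bandit.

    true_means: per-arm success probabilities (unknown to the agent).
    With probability epsilon the agent explores a uniformly random arm;
    otherwise it exploits the arm with the best empirical mean so far.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms       # number of pulls per arm
    values = [0.0] * n_arms     # empirical mean reward per arm
    total_reward = 0
    for _ in range(n_rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                       # explore
        else:
            arm = max(range(n_arms), key=values.__getitem__)  # exploit
        reward = 1 if rng.random() < true_means[arm] else 0   # Bernoulli draw
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean
        total_reward += reward
    return values, counts, total_reward
```

With a small epsilon, most pulls concentrate on the empirically best arm while a fixed fraction of rounds keeps estimates of the other arms from going stale; many of the papers listed below study more refined ways of scheduling this exploration (UCB, Thompson sampling, Maillard sampling, etc.).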

(Image credit: Microsoft Research)

Papers

Showing 61–70 of 1262 papers

| Title | Status | Hype |
| --- | --- | --- |
| Multi-agent Multi-armed Bandits with Minimum Reward Guarantee Fairness | Code | 0 |
| Achieving adaptivity and optimality for multi-armed bandits using Exponential-Kullback Leibler Maillard Sampling | | 0 |
| Efficient and Optimal Policy Gradient Algorithm for Corrupted Multi-armed Bandits | | 0 |
| Continuous K-Max Bandits | | 0 |
| Contextual Linear Bandits with Delay as Payoff | | 0 |
| Model selection for behavioral learning data and applications to contextual bandits | | 0 |
| Near-Optimal Private Learning in Linear Contextual Bandits | | 0 |
| Improved Offline Contextual Bandits with Second-Order Bounds: Betting and Freezing | | 0 |
| Contextual bandits with entropy-based human feedback | Code | 0 |
| Heterogeneous Multi-agent Multi-armed Bandits on Stochastic Block Models | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified |
| 2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified |