
Multi-Armed Bandits

Multi-armed bandits refer to the task of allocating a fixed amount of resources among competing choices (arms) so as to maximize expected gain, when the payoff of each choice is only partially known. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
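To illustrate the exploration/exploitation trade-off, here is a minimal sketch of an epsilon-greedy strategy on a Bernoulli bandit. The arm success probabilities, the epsilon value, and the helper name are illustrative assumptions for this example, not taken from any paper listed on this page.

```python
import random

def epsilon_greedy(true_probs, n_rounds=10_000, epsilon=0.1, seed=0):
    """Run epsilon-greedy on a Bernoulli bandit and return total reward and estimates.

    true_probs: assumed per-arm success probabilities (unknown to the learner).
    epsilon: probability of exploring a random arm instead of exploiting.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    counts = [0] * n_arms      # number of pulls per arm
    values = [0.0] * n_arms    # running mean reward per arm
    total_reward = 0.0

    for _ in range(n_rounds):
        if rng.random() < epsilon:                        # explore: random arm
            arm = rng.randrange(n_arms)
        else:                                             # exploit: best estimate so far
            arm = max(range(n_arms), key=lambda a: values[a])
        reward = 1.0 if rng.random() < true_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update
        total_reward += reward
    return total_reward, values

if __name__ == "__main__":
    # Hypothetical three-arm bandit; arm 2 has the highest true payoff.
    reward, estimates = epsilon_greedy([0.2, 0.5, 0.7])
    print(f"total reward: {reward:.0f}, estimated means: {estimates}")
```

A small epsilon keeps most pulls on the currently best-looking arm (exploitation) while still sampling the others often enough to correct a mistaken estimate (exploration).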

Papers

Showing 751-760 of 1262 papers

Title | Status | Hype
Output-Weighted Sampling for Multi-Armed Bandits with Extreme Payoffs | Code | 0
Top-k eXtreme Contextual Bandits with Arm Hierarchy | Code | 0
Meta-Thompson Sampling | - | 0
Multi-Agent Multi-Armed Bandits with Limited Communication | - | 0
Non-stationary Reinforcement Learning without Prior Knowledge: An Optimal Black-box Approach | - | 0
Regression Oracles and Exploration Strategies for Short-Horizon Multi-Armed Bandits | - | 0
Player Modeling via Multi-Armed Bandits | - | 0
Fine-Grained Gap-Dependent Bounds for Tabular MDPs via Adaptive Multi-Step Bootstrap | - | 0
Bandits for Learning to Explain from Explanations | - | 0
Online Limited Memory Neural-Linear Bandits with Likelihood Matching | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | - | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | - | Unverified
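For context on the metric above, cumulative regret measures the total expected shortfall of the arms actually chosen relative to always pulling the best arm. The sketch below shows one common way to compute it; the arm means and the chosen-arm sequence are hypothetical and unrelated to the benchmark's actual setup.

```python
def cumulative_regret(true_probs, chosen_arms):
    """Expected cumulative regret of a sequence of arm choices.

    true_probs: assumed per-arm mean rewards (illustrative values).
    chosen_arms: arm index selected at each round.
    """
    best = max(true_probs)
    return sum(best - true_probs[a] for a in chosen_arms)

# Hypothetical run: the learner mostly picks arm 2 (the best) but explores others.
print(cumulative_regret([0.2, 0.5, 0.7], [0, 1, 2, 2, 2, 1, 2, 2]))  # 0.9
```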