SOTAVerified

Multi-Armed Bandits

The multi-armed bandit problem is a task in which a fixed, limited amount of resources must be allocated among competing choices (arms) so as to maximize expected gain, when each arm's reward distribution is only partially known. These problems typically involve an exploration/exploitation trade-off, illustrated in the sketch below.
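A minimal epsilon-greedy sketch of that trade-off: with probability epsilon the agent explores a random arm, otherwise it exploits the arm with the best current estimate. The arm means, the epsilon value, and the horizon below are illustrative assumptions, not taken from any paper listed on this page.

import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.1, 0.5, 0.7])   # unknown to the learner
n_arms = len(true_means)
epsilon = 0.1
counts = np.zeros(n_arms)
estimates = np.zeros(n_arms)

for t in range(1000):
    if rng.random() < epsilon:
        arm = int(rng.integers(n_arms))   # explore: pick a random arm
    else:
        arm = int(np.argmax(estimates))   # exploit: pick the best arm so far
    reward = rng.binomial(1, true_means[arm])  # Bernoulli reward
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

print("pull counts:", counts)
print("estimated means:", np.round(estimates, 3))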

(Image credit: Microsoft Research)

Papers

Showing 681–690 of 1262 papers

Title | Status | Hype
Indexability and Rollout Policy for Multi-State Partially Observable Restless Bandits | - | 0
Combining Online Learning and Offline Learning for Contextual Bandits with Deficient Support | - | 0
Finite-time Analysis of Globally Nonstationary Multi-Armed Bandits | Code | 0
From Predictions to Decisions: The Importance of Joint Predictive Distributions | - | 0
An Analysis of Reinforcement Learning for Malaria Control | - | 0
GuideBoot: Guided Bootstrap for Deep Contextual Bandits | - | 0
Inverse Contextual Bandits: Learning How Behavior Evolves over Time | Code | 0
Adapting to Misspecification in Contextual Bandits | - | 0
Model Selection for Generic Contextual Bandits | - | 0
Neural Contextual Bandits without Regret | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | - | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | - | Unverified
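The cumulative regret metric reported above is the running sum, over interaction rounds, of the gap between the expected reward of the best arm and that of the arm the policy actually chose. A minimal sketch of how such a value is computed; the per-round reward values below are purely hypothetical and are not taken from the benchmark.

import numpy as np

# Hypothetical per-round expected rewards of the optimal arm and of the arm
# the policy chose; in the benchmark these come from the bandit environment.
optimal = np.array([0.7, 0.7, 0.7, 0.7, 0.7])
chosen = np.array([0.1, 0.5, 0.7, 0.7, 0.7])

cumulative_regret = np.cumsum(optimal - chosen)
print(cumulative_regret)  # [0.6 0.8 0.8 0.8 0.8]; the final entry is the reported number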