
Multi-Armed Bandits

Multi-armed bandits are problems in which a fixed amount of resources must be allocated among competing choices in a way that maximizes expected gain, when each choice's payoff is only partially known at the time of allocation. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
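
To make the exploration/exploitation trade-off concrete, here is a minimal epsilon-greedy sketch in Python. The function name, the Bernoulli arms, and the parameter values are all illustrative assumptions, not taken from any of the papers listed below; with probability epsilon the policy explores a random arm, and otherwise it exploits the arm with the best empirical mean so far.

```python
import random

def epsilon_greedy_bandit(arms, pulls=1000, epsilon=0.1):
    """Run an epsilon-greedy policy over a list of reward callables.

    `arms`, `pulls`, and `epsilon` are hypothetical names chosen for
    this sketch; each arm is a zero-argument callable returning a reward.
    """
    counts = [0] * len(arms)    # number of pulls per arm
    values = [0.0] * len(arms)  # running mean reward per arm
    total_reward = 0.0
    for _ in range(pulls):
        if random.random() < epsilon:
            # Explore: pull a uniformly random arm.
            i = random.randrange(len(arms))
        else:
            # Exploit: pull the arm with the highest empirical mean.
            i = max(range(len(arms)), key=lambda j: values[j])
        r = arms[i]()
        counts[i] += 1
        values[i] += (r - values[i]) / counts[i]  # incremental mean update
        total_reward += r
    return values, total_reward

# Example: three Bernoulli arms with hidden success probabilities.
arms = [lambda p=p: 1.0 if random.random() < p else 0.0
        for p in (0.3, 0.5, 0.7)]
estimates, reward = epsilon_greedy_bandit(arms)
print(estimates, reward)
```

With enough pulls the empirical means converge toward the hidden probabilities, while the epsilon fraction of random pulls keeps every arm sampled; more refined policies (UCB, Thompson sampling) reduce the regret this uniform exploration incurs.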

Papers

Showing 181–190 of 1262 papers

| Title | Status | Hype |
| --- | --- | --- |
| Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits | | 0 |
| Stabilizing the Kumaraswamy Distribution | | 0 |
| Optimism in the Face of Ambiguity Principle for Multi-Armed Bandits | | 0 |
| Linear Contextual Bandits with Interference | | 0 |
| Second Order Bounds for Contextual Bandits with Function Approximation | | 0 |
| Designing an Interpretable Interface for Contextual Bandits | | 0 |
| Causal Feature Selection Method for Contextual Multi-Armed Bandits in Recommender System | | 0 |
| Partially Observable Contextual Bandits with Linear Payoffs | | 0 |
| Batch Ensemble for Variance Dependent Regret in Stochastic Bandits | | 0 |
| Batched Online Contextual Sparse Bandits with Sequential Inclusion of Features | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified |
| 2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified |