SOTAVerified

Multi-Armed Bandits

The multi-armed bandit problem refers to a task in which a fixed, limited set of resources must be allocated among competing alternative choices (arms) in a way that maximizes expected gain. These problems typically involve an exploration/exploitation trade-off: the learner must balance trying arms to learn their payoffs against repeatedly playing the arm that currently looks best.

(Image credit: Microsoft Research)
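As a concrete illustration of the exploration/exploitation trade-off described above, here is a minimal epsilon-greedy sketch on a toy Bernoulli bandit. The arm probabilities, epsilon, and horizon are illustrative choices, not taken from any paper listed below.

```python
import random

def epsilon_greedy(true_probs, epsilon=0.1, horizon=1000):
    """Play a Bernoulli bandit with epsilon-greedy action selection."""
    n_arms = len(true_probs)
    counts = [0] * n_arms    # pulls per arm
    values = [0.0] * n_arms  # running mean reward per arm
    total_reward = 0.0
    for _ in range(horizon):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if random.random() < epsilon:
            arm = random.randrange(n_arms)
        else:
            arm = max(range(n_arms), key=lambda a: values[a])
        reward = 1.0 if random.random() < true_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total_reward += reward
    return total_reward, values

if __name__ == "__main__":
    reward, estimates = epsilon_greedy([0.3, 0.5, 0.7])
    print(f"total reward: {reward:.0f}, estimates: {[round(v, 2) for v in estimates]}")
```

With enough exploration, the running means converge toward the true arm probabilities, so exploitation increasingly picks the best arm.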

Papers

Showing 731–740 of 1262 papers

Title | Status | Hype
Open Problem: Model Selection for Contextual Bandits | | 0
Open Problem: Tight Bounds for Kernelized Multi-Armed Bandits with Bernoulli Rewards | | 0
Optimal Activation of Halting Multi-Armed Bandit Models | | 0
Optimal Algorithms for Range Searching over Multi-Armed Bandits | | 0
Optimal Algorithms for Stochastic Contextual Preference Bandits | | 0
Optimal Algorithms for Stochastic Multi-Armed Bandits with Heavy Tailed Rewards | | 0
Optimal and Adaptive Off-policy Evaluation in Contextual Bandits | | 0
Optimal Best-Arm Identification under Fixed Confidence with Multiple Optima | | 0
Optimal cross-learning for contextual bandits with unknown context distributions | | 0
Optimal Multitask Linear Regression and Contextual Bandits under Sparse Heterogeneity | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified
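The metric in the table above, cumulative regret, measures how much total reward a policy loses relative to always playing the best arm. Below is a minimal sketch of how it is typically computed for a stochastic bandit, assuming the true arm means are known (as they are in simulation); this is illustrative and not the evaluation code behind the benchmark numbers.

```python
def cumulative_regret(true_means, arms_played):
    """Regret after T steps: sum over t of (best mean - mean of arm played at t)."""
    best = max(true_means)
    regret, curve = 0.0, []
    for arm in arms_played:
        regret += best - true_means[arm]
        curve.append(regret)
    return curve

# Example: against means [0.3, 0.7], each pull of arm 0 accrues 0.4 regret.
print(cumulative_regret([0.3, 0.7], [0, 0, 1, 0])[-1])  # 1.2
```

Lower cumulative regret is better; an algorithm whose regret grows sublinearly in the horizon is learning to concentrate its pulls on the optimal arm.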