
Multi-Armed Bandits

Multi-armed bandits refer to a class of problems in which a fixed, limited set of resources must be allocated among competing choices so as to maximize expected gain. These problems typically involve an exploration/exploitation trade-off.
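
The exploration/exploitation trade-off can be made concrete with a minimal epsilon-greedy sketch, which is not taken from any paper listed below: with probability epsilon the agent pulls a random arm (exploration), otherwise the arm with the best running mean reward (exploitation). The arm count, reward probabilities, and epsilon value here are illustrative assumptions.

```python
import random

def epsilon_greedy_bandit(reward_probs, epsilon=0.1, steps=1000):
    """Run an epsilon-greedy policy on a Bernoulli bandit.

    reward_probs: true success probability of each arm (unknown to the agent).
    epsilon: probability of exploring a random arm instead of exploiting.
    """
    n_arms = len(reward_probs)
    counts = [0] * n_arms      # pulls per arm
    values = [0.0] * n_arms    # running mean reward per arm
    total_reward = 0.0

    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)  # explore a random arm
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit best estimate
        reward = 1.0 if random.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update
        total_reward += reward
    return total_reward, values

# Hypothetical 3-armed bandit; probabilities are made up for the example.
print(epsilon_greedy_bandit([0.2, 0.5, 0.7]))
```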

(Image credit: Microsoft Research)

Papers

Showing 1141–1150 of 1262 papers

| Title | Status | Hype |
|---|---|---|
| Variational inference for the multi-armed contextual bandit | Code | 0 |
| Ease.ml: Towards Multi-tenant Resource Sharing for Machine Learning Workloads | | 0 |
| Efficient Contextual Bandits in Non-stationary Worlds | | 0 |
| Reinforcement learning techniques for Outer Loop Link Adaptation in 4G/5G systems | | 0 |
| Safety-Aware Algorithms for Adversarial Contextual Bandit | | 0 |
| A Survey of Learning in Multiagent Environments: Dealing with Non-Stationarity | | 0 |
| Nonlinear Sequential Accepts and Rejects for Identification of Top Arms in Stochastic Bandits | | 0 |
| Efficient Reinforcement Learning via Initial Pure Exploration | | 0 |
| Nearly Optimal Sampling Algorithms for Combinatorial Pure Exploration | | 0 |
| Boltzmann Exploration Done Right | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified |
| 2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified |
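
For reference, cumulative regret, the metric reported above, measures how much reward a policy loses relative to always playing the best arm. A minimal sketch of how it is computed; the arm means and pull sequence are made-up values for illustration, not drawn from the benchmark:

```python
def cumulative_regret(arm_means, chosen_arms):
    """Regret after T pulls: sum over t of (best arm mean - mean of chosen arm)."""
    best = max(arm_means)
    return sum(best - arm_means[a] for a in chosen_arms)

# Hypothetical example: three arms, five pulls.
print(cumulative_regret([0.2, 0.5, 0.7], [0, 2, 2, 1, 2]))  # 0.5 + 0 + 0 + 0.2 + 0 = 0.7
```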