SOTAVerified

Multi-Armed Bandits

Multi-armed bandits refer to the task of allocating a fixed budget of resources among competing choices (the "arms") so as to maximize expected gain. Because each arm's payoff is only partially known at the time of allocation, these problems typically involve an exploration/exploitation trade-off: the learner must balance trying under-explored arms against repeatedly playing the arm that currently looks best.

(Image credit: Microsoft Research)
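As a concrete illustration of the exploration/exploitation trade-off described above, here is a minimal epsilon-greedy sketch in Python. The arm means, epsilon value, and horizon are illustrative assumptions chosen for the example, not values taken from any paper or benchmark listed below.

```python
import random

def epsilon_greedy_bandit(true_means, epsilon=0.1, horizon=1000, seed=0):
    """Minimal epsilon-greedy sketch: with probability epsilon pull a random
    arm (explore); otherwise pull the arm with the highest empirical mean
    reward (exploit). All parameter values here are illustrative assumptions."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # number of pulls per arm
    estimates = [0.0] * n_arms   # empirical mean reward per arm
    total_reward = 0.0

    for _ in range(horizon):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                           # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit
        # Bernoulli reward drawn from the true mean, unknown to the agent.
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        # Incremental update of the empirical mean for the pulled arm.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward

    return total_reward, estimates

# Example with three arms and assumed success probabilities.
reward, estimates = epsilon_greedy_bandit([0.2, 0.5, 0.7])
print(reward, estimates)
```

With a small epsilon the policy converges toward the best arm while still sampling the others occasionally, which is the trade-off the contextual and combinatorial variants below generalize.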

Papers

Showing 71–80 of 1262 papers

Title | Status | Hype
Active Inference for Autonomous Decision-Making with Contextual Multi-Armed Bandits | | 0
Adaptive Exploration in Linear Contextual Bandit | | 0
Accurate and Fast Federated Learning via Combinatorial Multi-Armed Bandits | | 0
An Analysis of the Value of Information when Exploring Stochastic, Discrete Multi-Armed Bandits | | 0
A Bandit Approach to Sequential Experimental Design with False Discovery Control | | 0
Access Probability Optimization in RACH: A Multi-Armed Bandits Approach | | 0
An Adaptive Method for Contextual Stochastic Multi-armed Bandits with Rewards Generated by a Linear Dynamical System | | 0
Adaptive Endpointing with Deep Contextual Multi-armed Bandits | | 0
Algorithms with Logarithmic or Sublinear Regret for Constrained Contextual Bandits | | 0
Adaptive Discretization against an Adversary: Lipschitz bandits, Dynamic Pricing, and Auction Tuning | | 0
Page 8 of 127

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified
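For context, cumulative regret, the metric reported above, is conventionally defined as the expected shortfall from always playing the best arm; this is the standard textbook definition, not a statement about the exact evaluation protocol behind these numbers:

$$ R(T) = \sum_{t=1}^{T} \left( \mu^{*} - \mu_{a_t} \right), \qquad \mu^{*} = \max_{i} \mu_{i}, $$

where $\mu_i$ is the mean reward of arm $i$ and $a_t$ is the arm chosen at step $t$; lower values indicate a better policy.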