
Multi-Armed Bandits

Multi-armed bandits refer to a task where a fixed amount of resources must be allocated between competing choices in a way that maximizes expected gain. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
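As a concrete illustration of the exploration/exploitation trade-off, here is a minimal epsilon-greedy sketch on simulated Bernoulli arms. The arm success rates, epsilon, and round count are illustrative assumptions, not taken from any paper listed below.

```python
import random

# Minimal epsilon-greedy sketch: explore with probability EPSILON,
# otherwise exploit the arm with the best estimated mean reward.
ARM_PROBS = [0.3, 0.5, 0.7]  # hypothetical true success rates (unknown to the agent)
EPSILON = 0.1                # fraction of rounds spent exploring
N_ROUNDS = 10_000

counts = [0] * len(ARM_PROBS)    # pulls per arm
values = [0.0] * len(ARM_PROBS)  # running mean reward per arm

for _ in range(N_ROUNDS):
    if random.random() < EPSILON:
        arm = random.randrange(len(ARM_PROBS))  # explore: pick a random arm
    else:
        arm = max(range(len(ARM_PROBS)), key=lambda a: values[a])  # exploit
    reward = 1.0 if random.random() < ARM_PROBS[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update

print("estimated means:", [round(v, 3) for v in values])
print("pulls per arm:  ", counts)
```

With these settings, the estimates concentrate around the true rates and most pulls go to the best arm, while the epsilon fraction of random pulls keeps the other estimates from going stale.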

Papers

Showing 126–150 of 1262 papers

| Title | Status | Hype |
| --- | --- | --- |
| Minimum Empirical Divergence for Sub-Gaussian Linear Bandits | Code | 0 |
| FedMABA: Towards Fair Federated Learning through Multi-Armed Bandits Allocation | | 0 |
| Learning to Explore with Lagrangians for Bandits under Unknown Linear Constraints | | 0 |
| Optimal Streaming Algorithms for Multi-Armed Bandits | | 0 |
| Reward Maximization for Pure Exploration: Minimax Optimal Good Arm Identification for Nonparametric Multi-Armed Bandits | | 0 |
| Contextual Bandits with Arm Request Costs and Delays | | 0 |
| Online Learning for Function Placement in Serverless Computing | Code | 0 |
| Is Prior-Free Black-Box Non-Stationary Reinforcement Learning Feasible? | | 0 |
| How Does Variance Shape the Regret in Contextual Bandits? | | 0 |
| Comparative Performance of Collaborative Bandit Algorithms: Effect of Sparsity and Exploration Intensity | | 0 |
| Combinatorial Multi-armed Bandits: Arm Selection via Group Testing | | 0 |
| EVOLvE: Evaluating and Optimizing LLMs For Exploration | | 0 |
| Stochastic Bandits for Egalitarian Assignment | | 0 |
| Diminishing Exploration: A Minimalist Approach to Piecewise Stationary Multi-Armed Bandits | | 0 |
| Contextual Bandits with Non-Stationary Correlated Rewards for User Association in MmWave Vehicular Networks | | 0 |
| DOPL: Direct Online Preference Learning for Restless Bandits with Preference Feedback | | 0 |
| High Probability Bound for Cross-Learning Contextual Bandits with Unknown Context Distributions | | 0 |
| Online Posterior Sampling with a Diffusion Prior | | 0 |
| Minimax-optimal trust-aware multi-armed bandits | | 0 |
| uniINF: Best-of-Both-Worlds Algorithm for Parameter-Free Heavy-Tailed MABs | | 0 |
| On Lai's Upper Confidence Bound in Multi-Armed Bandits | | 0 |
| Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits | | 0 |
| LASeR: Learning to Adaptively Select Reward Models with Multi-Armed Bandits | Code | 1 |
| Stabilizing the Kumaraswamy Distribution | | 0 |
| Optimism in the Face of Ambiguity Principle for Multi-Armed Bandits | | 0 |
Page 6 of 51

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified |
| 2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified |
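The metric reported above, cumulative regret, is the running sum of gaps between the expected reward of the best arm and the expected reward of each arm actually pulled. A minimal sketch of how it is typically computed, assuming hypothetical per-round arm means not tied to the FullPosterior-MR runs above:

```python
def cumulative_regret(chosen_means, best_mean):
    """Sum of per-round gaps between the best arm's mean reward
    and the mean reward of the arm the agent actually chose."""
    return sum(best_mean - m for m in chosen_means)

# Hypothetical example: three rounds where the agent pulled arms with
# means 0.5, 0.7, 0.7 while the best arm's mean is 0.7 -> regret only
# accumulates on round one.
print(cumulative_regret([0.5, 0.7, 0.7], best_mean=0.7))  # ~0.2
```

Lower is better: an algorithm whose per-round regret shrinks over time accumulates regret sublinearly, which is the usual target in the papers listed here.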