SOTAVerified

Multi-Armed Bandits

The multi-armed bandit problem is a task in which a fixed, limited set of resources must be allocated among competing choices so as to maximize expected gain, when each choice's properties are only partially known at the time of allocation. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
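As a concrete illustration of the exploration/exploitation trade-off, below is a minimal sketch of the classic UCB1 strategy on Bernoulli arms. The `pull` callback, arm probabilities, and horizon are illustrative assumptions, not drawn from any paper listed on this page.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Minimal UCB1 sketch: play each arm once, then repeatedly pick the
    arm with the highest upper confidence bound. `pull(arm)` is assumed
    to return a reward in [0, 1]."""
    counts = [0] * n_arms    # times each arm has been played
    means = [0.0] * n_arms   # running mean reward per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1      # initialization: try every arm once
        else:
            # exploitation term (running mean) plus an exploration bonus
            # that shrinks as an arm is sampled more often
            arm = max(range(n_arms),
                      key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))
        r = pull(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental mean update
    return means, counts

# Toy usage: three Bernoulli arms with unknown success probabilities.
probs = [0.2, 0.5, 0.7]
means, counts = ucb1(lambda a: float(random.random() < probs[a]), 3, 10_000)
print(counts)  # the 0.7 arm should dominate the pull counts
```

The square-root bonus is what drives exploration: rarely pulled arms keep a large confidence bound, so they are revisited until their running means become trustworthy.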

Papers

Showing 101–150 of 1262 papers

Title | Status | Hype
A Novel Approach to Balance Convenience and Nutrition in Meals With Long-Term Group Recommendations and Reasoning on Multimodal Recipes and its Implementation in BEACON | – | 0
Balans: Multi-Armed Bandits-based Adaptive Large Neighborhood Search for Mixed-Integer Programming Problem | Code | 1
Lagrangian Index Policy for Restless Bandits with Average Reward | – | 0
MaxInfoRL: Boosting exploration in reinforcement learning through information gain maximization | – | 0
IRL for Restless Multi-Armed Bandits with Applications in Maternal and Child Health | Code | 0
An Optimistic Algorithm for Online Convex Optimization with Adversarial Constraints | – | 0
UCB algorithms for multi-armed bandits: Precise regret and adaptive inference | – | 0
Conservative Contextual Bandits: Beyond Linear Representations | – | 0
Coordinated Multi-Armed Bandits for Improved Spatial Reuse in Wi-Fi | – | 0
Data Acquisition for Improving Model Fairness using Reinforcement Learning | – | 0
Selective Reviews of Bandit Problems in AI via a Statistical View | – | 0
Contextual Bandits in Payment Processing: Non-uniform Exploration and Supervised Learning at Adyen | – | 0
Achieving PAC Guarantees in Mechanism Design through Multi-Armed Bandits | – | 0
Off-policy estimation with adaptively collected data: the power of online learning | – | 0
A unifying framework for generalised Bayesian online learning in non-stationary environments | Code | 1
Multi-Agent Stochastic Bandits Robust to Adversarial Corruptions | – | 0
Individual Regret in Cooperative Stochastic Multi-Armed Bandits | – | 0
Variance-Aware Linear UCB with Deep Representation for Neural Contextual Bandits | Code | 0
Multi-armed Bandits with Missing Outcome | Code | 0
Structure Matters: Dynamic Policy Gradient | – | 0
Sharp Analysis for KL-Regularized Contextual Bandits and RLHF | – | 0
Rising Rested Bandits: Lower Bounds and Efficient Algorithms | – | 0
Non-Stationary Learning of Neural Networks with Automatic Soft Parameter Reset | – | 0
PageRank Bandits for Link Prediction | Code | 0
MBExplainer: Multilevel bandit-based explanations for downstream models with augmented graph embeddings | – | 0
Minimum Empirical Divergence for Sub-Gaussian Linear Bandits | Code | 0
FedMABA: Towards Fair Federated Learning through Multi-Armed Bandits Allocation | – | 0
Learning to Explore with Lagrangians for Bandits under Unknown Linear Constraints | – | 0
Optimal Streaming Algorithms for Multi-Armed Bandits | – | 0
Reward Maximization for Pure Exploration: Minimax Optimal Good Arm Identification for Nonparametric Multi-Armed Bandits | – | 0
Contextual Bandits with Arm Request Costs and Delays | – | 0
Online Learning for Function Placement in Serverless Computing | Code | 0
Is Prior-Free Black-Box Non-Stationary Reinforcement Learning Feasible? | – | 0
How Does Variance Shape the Regret in Contextual Bandits? | – | 0
Comparative Performance of Collaborative Bandit Algorithms: Effect of Sparsity and Exploration Intensity | – | 0
Combinatorial Multi-armed Bandits: Arm Selection via Group Testing | – | 0
EVOLvE: Evaluating and Optimizing LLMs For Exploration | – | 0
Stochastic Bandits for Egalitarian Assignment | – | 0
Diminishing Exploration: A Minimalist Approach to Piecewise Stationary Multi-Armed Bandits | – | 0
Contextual Bandits with Non-Stationary Correlated Rewards for User Association in MmWave Vehicular Networks | – | 0
DOPL: Direct Online Preference Learning for Restless Bandits with Preference Feedback | – | 0
High Probability Bound for Cross-Learning Contextual Bandits with Unknown Context Distributions | – | 0
Online Posterior Sampling with a Diffusion Prior | – | 0
Minimax-optimal trust-aware multi-armed bandits | – | 0
uniINF: Best-of-Both-Worlds Algorithm for Parameter-Free Heavy-Tailed MABs | – | 0
On Lai's Upper Confidence Bound in Multi-Armed Bandits | – | 0
Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits | – | 0
LASeR: Learning to Adaptively Select Reward Models with Multi-Armed Bandits | Code | 1
Stabilizing the Kumaraswamy Distribution | – | 0
Optimism in the Face of Ambiguity Principle for Multi-Armed Bandits | – | 0
Page 3 of 26

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | – | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | – | Unverified
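For reference, the cumulative regret reported above is conventionally the gap between always playing the best arm and the arms the algorithm actually chose. The notation below is the standard textbook definition, not something taken from these entries.

```latex
% Cumulative (pseudo-)regret after T rounds, where a_t is the arm
% chosen at round t and \mu_a is the mean reward of arm a:
R_T \;=\; \sum_{t=1}^{T} \left( \mu^\star - \mu_{a_t} \right),
\qquad \mu^\star = \max_a \mu_a .
```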