SOTAVerified

Multi-Armed Bandits

Multi-armed bandits refer to a class of problems in which a fixed amount of resources must be allocated among competing choices (arms) so as to maximize expected gain, when each choice's payoff is only partially known at allocation time. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
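The exploration/exploitation trade-off described above can be illustrated with a minimal epsilon-greedy agent. This is a generic sketch, not code from any paper listed below; the function name, the Gaussian reward model, and all parameter values are illustrative assumptions.

```python
import random

def epsilon_greedy_bandit(true_means, steps=10000, epsilon=0.1, seed=0):
    """Illustrative epsilon-greedy agent on a stochastic bandit.

    Each arm's reward is drawn from a Gaussian with the given mean and
    unit variance (an assumption for this sketch, not a general rule).
    """
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k          # number of pulls per arm
    estimates = [0.0] * k     # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        # Explore a random arm with probability epsilon,
        # otherwise exploit the arm with the best current estimate.
        if rng.random() < epsilon:
            arm = rng.randrange(k)
        else:
            arm = max(range(k), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        # Incremental update of the arm's sample-mean estimate.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return estimates, counts, total_reward
```

With enough steps, the estimates concentrate around the true means and the greedy choice settles on the best arm, while the epsilon fraction of random pulls keeps the other arms' estimates from going stale.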

Papers

Showing 1001–1050 of 1262 papers

Title | Status | Hype
Intrinsically Efficient, Stable, and Bounded Off-Policy Evaluation for Reinforcement Learning | Code | 0
Empirical Likelihood for Contextual Bandits | Code | 0
Distribution oblivious, risk-aware algorithms for multi-armed bandits with unbounded rewards | Code | 0
Model selection for contextual bandits | Code | 0
Rarely-switching linear bandits: optimization of causal effects for the real world | — | 0
Multi-Objective Generalized Linear Bandits | — | 0
Distribution-dependent and Time-uniform Bounds for Piecewise i.i.d Bandits | — | 0
Equipping Experts/Bandits with Long-term Memory | — | 0
Regret Bounds for Thompson Sampling in Episodic Restless Bandit Problems | Code | 0
Differential Privacy for Multi-armed Bandits: What Is It and What Is Its Cost? | — | 0
Top-k Combinatorial Bandits with Full-Bandit Feedback | — | 0
Achieving Fairness in Stochastic Multi-armed Bandit Problem | — | 0
Are sample means in multi-armed bandits positively or negatively biased? | — | 0
OSOM: A simultaneously optimal algorithm for multi-armed and linear contextual bandits | — | 0
Data Poisoning Attacks on Stochastic Bandits | — | 0
Lessons from Contextual Bandit Learning in a Customer Support Bot | — | 0
Tight Regret Bounds for Infinite-armed Linear Contextual Bandits | — | 0
Meta-learners' learning dynamics are unlike learners' | — | 0
Non-Stochastic Multi-Player Multi-Armed Bandits: Optimal Rate With Collision Information, Sublinear Without | — | 0
Constrained Restless Bandits for Dynamic Scheduling in Cyber-Physical Systems | — | 0
Introduction to Multi-Armed Bandits | Code | 0
Distributed Bandit Learning: Near-Optimal Regret with Efficient Communication | — | 0
Collaborative Learning with Limited Interaction: Tight Bounds for Distributed Exploration in Multi-Armed Bandits | — | 0
Batched Multi-armed Bandits Problem | Code | 0
A Survey on Practical Applications of Multi-Armed and Contextual Bandits | — | 0
Nearly Minimax-Optimal Regret for Linearly Parameterized Bandits | — | 0
Meta-Learning surrogate models for sequential decision making | — | 0
Contextual Bandits with Random Projection | — | 0
From Complexity to Simplicity: Adaptive ES-Active Subspaces for Blackbox Optimization | Code | 0
Perturbed-History Exploration in Stochastic Multi-Armed Bandits | — | 0
Better Algorithms for Stochastic Bandits with Adversarial Corruptions | — | 0
AdaLinUCB: Opportunistic Learning for Contextual Bandits | — | 0
Equal Opportunity in Online Classification with Partial Feedback | Code | 0
Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting | — | 0
A New Algorithm for Non-stationary Contextual Bandits: Efficient, Optimal, and Parameter-free | — | 0
Randomized Allocation with Nonparametric Estimation for Contextual Multi-Armed Bandits with Delayed Rewards | — | 0
On the bias, risk and consistency of sample means in multi-armed bandits | — | 0
Target Tracking for Contextual Bandits: Application to Demand Side Management | — | 0
Almost Boltzmann Exploration | — | 0
The Assistive Multi-Armed Bandit | Code | 0
PAC Identification of Many Good Arms in Stochastic Multi-Armed Bandits | — | 0
Regret Minimisation in Multi-Armed Bandits Using Bounded Arm Memory | — | 0
Deep Neural Linear Bandits: Overcoming Catastrophic Forgetting through Likelihood Matching | — | 0
Parallel Contextual Bandits in Wireless Handover Optimization | — | 0
Imitation-Regularized Offline Learning | — | 0
Concentration bounds for CVaR estimation: The cases of light-tailed and heavy-tailed distributions | — | 0
Warm-starting Contextual Bandits: Robustly Combining Supervised and Bandit Feedback | Code | 0
Multi-player Multi-armed Bandits for Stable Allocation in Heterogeneous Ad-Hoc Networks | — | 0
Human-AI Learning Performance in Multi-Armed Bandits | — | 0
Generalizable Meta-Heuristic based on Temporal Estimation of Rewards for Large Scale Blackbox Optimization | — | 0
Page 21 of 26

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | — | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | — | Unverified