
Multi-Armed Bandits

Multi-armed bandits refer to a class of problems in which a fixed, limited set of resources must be allocated among competing choices (arms) so as to maximize expected gain, when each choice's payoff is only partially known at the time of allocation. These problems typically involve an exploration/exploitation trade-off: the learner must balance gathering information about uncertain arms against repeatedly pulling the arm that currently looks best.
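As a minimal, illustrative sketch of that trade-off, the snippet below runs an epsilon-greedy agent on a Bernoulli bandit. The arm success rates, epsilon value, and horizon are arbitrary assumptions chosen for the example, not values taken from any paper listed on this page.

```python
import random

ARM_PROBS = [0.3, 0.5, 0.7]      # assumed success rate of each arm (illustrative)
EPSILON = 0.1                    # probability of exploring a random arm
N_ROUNDS = 10_000                # horizon

counts = [0] * len(ARM_PROBS)    # number of pulls per arm
values = [0.0] * len(ARM_PROBS)  # running mean reward per arm

total_reward = 0.0
for t in range(N_ROUNDS):
    # Explore with probability EPSILON, otherwise exploit the best estimate so far.
    if random.random() < EPSILON:
        arm = random.randrange(len(ARM_PROBS))
    else:
        arm = max(range(len(ARM_PROBS)), key=lambda a: values[a])

    reward = 1.0 if random.random() < ARM_PROBS[arm] else 0.0

    # Incremental update of the empirical mean reward for the chosen arm.
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]
    total_reward += reward

# Cumulative regret: shortfall relative to always pulling the best arm (in expectation).
regret = max(ARM_PROBS) * N_ROUNDS - total_reward
print(f"estimated means: {[round(v, 3) for v in values]}, regret ~ {regret:.1f}")
```

Cumulative regret, computed in the last lines, is the same metric reported in the Benchmark Results table at the bottom of this page: lower is better, since it measures how much reward was lost to exploration and to mistaken arm choices.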

(Image credit: Microsoft Research)

Papers

Showing 701–750 of 1262 papers

Title | Status | Hype
Reinforcement Learning for Physical Layer Communications | Code | 0
BanditMF: Multi-Armed Bandit Based Matrix Factorization Recommender System | – | 0
Smooth Sequential Optimisation with Delayed Feedback | – | 0
Banker Online Mirror Descent | – | 0
Guaranteed Fixed-Confidence Best Arm Identification in Multi-Armed Bandits: Simple Sequential Elimination Algorithms | – | 0
Towards Costless Model Selection in Contextual Bandits: A Bias-Variance Perspective | – | 0
A Central Limit Theorem, Loss Aversion and Multi-Armed Bandits | – | 0
Fixed-Budget Best-Arm Identification in Structured Bandits | – | 0
Scale Free Adversarial Multi Armed Bandits | – | 0
Cooperative Stochastic Multi-agent Multi-armed Bandits Robust to Adversarial Corruptions | – | 0
Generalized Linear Bandits with Local Differential Privacy | Code | 1
On Learning to Rank Long Sequences with Contextual Bandits | – | 0
Multi-facet Contextual Bandits: A Neural Network Perspective | Code | 0
Differentially Private Multi-Armed Bandits in the Shuffle Model | – | 0
Robust Stochastic Linear Contextual Bandits Under Adversarial Attacks | – | 0
Fair Exploration via Axiomatic Bargaining | – | 0
Optimal Rates of (Locally) Differentially Private Heavy-tailed Multi-Armed Bandits | – | 0
Stochastic Multi-Armed Bandits with Unrestricted Delay Distributions | – | 0
Off-Policy Evaluation via Adaptive Weighting with Data from Contextual Bandits | Code | 1
Addressing the Long-term Impact of ML Decisions via Policy Regret | Code | 0
Invariant Policy Learning: A Causal Perspective | Code | 0
Recurrent Submodular Welfare and Matroid Blocking Semi-Bandits | – | 0
Parallelizing Contextual Bandits | – | 0
Diffusion Approximations for Thompson Sampling | – | 0
Deep Bandits Show-Off: Simple and Efficient Exploration with Deep Networks | Code | 1
Combinatorial Multi-armed Bandits for Resource Allocation | Code | 0
Stochastic Multi-Armed Bandits with Control Variates | – | 0
Contextual Bandits with Sparse Data in Web setting | – | 0
Policy Learning with Adaptively Collected Data | Code | 0
Optimal Algorithms for Range Searching over Multi-Armed Bandits | – | 0
Statistical Inference with M-Estimators on Adaptively Collected Data | – | 0
Online certification of preference-based fairness for personalized recommender systems | – | 0
Off-Policy Risk Assessment in Contextual Bandits | – | 0
Censored Semi-Bandits for Resource Allocation | – | 0
An Efficient Algorithm for Deep Stochastic Contextual Bandits | – | 0
Leveraging Good Representations in Linear Contextual Bandits | – | 0
Multinomial Logit Contextual Bandits: Provable Optimality and Practicality | – | 0
Towards Optimal Algorithms for Multi-Player Bandits without Collision Sensing Information | – | 0
Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism | – | 0
Deep Contextual Bandits for Fast Neighbor-Aided Initial Access in mmWave Cell-Free Networks | – | 0
Encrypted Linear Contextual Bandit | – | 0
Nearest Neighbor Search Under Uncertainty | – | 0
Efficient Algorithms for Finite Horizon and Streaming Restless Multi-Armed Bandit Problems | – | 0
Selective Intervention Planning using Restless Multi-Armed Bandits to Improve Maternal and Child Health Outcomes | – | 0
Fairness of Exposure in Stochastic Bandits | – | 0
Local Clustering in Contextual Multi-Armed Bandits | – | 0
Adapting to Misspecification in Contextual Bandits with Offline Regression Oracles | – | 0
Online Multi-Armed Bandits with Adaptive Inference | – | 0
Combinatorial Bandits under Strategic Manipulations | Code | 0
Federated Multi-armed Bandits with Personalization | Code | 0
Page 15 of 26

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | – | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | – | Unverified