
Multi-Armed Bandits

Multi-armed bandits refer to a class of tasks in which a fixed amount of resources must be allocated among competing alternatives in a way that maximizes expected gain. These problems typically involve an exploration/exploitation trade-off: each alternative's payoff is only partially known, so the learner must balance gathering more information about the arms against acting on its current best estimate.

(Image credit: Microsoft Research)
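The trade-off is easiest to see in a small simulation. Below is a minimal, illustrative sketch of an epsilon-greedy policy on a Bernoulli bandit; the function name, arm means, and epsilon value are assumptions chosen for the example, not taken from any paper listed on this page.

```python
import random

def epsilon_greedy_bandit(arm_means, n_rounds=10_000, epsilon=0.1, seed=0):
    """Run epsilon-greedy on a Bernoulli bandit with the given true arm means."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k      # number of pulls per arm
    values = [0.0] * k    # empirical mean reward per arm
    total_reward = 0.0
    for _ in range(n_rounds):
        if rng.random() < epsilon:                          # explore: random arm
            arm = rng.randrange(k)
        else:                                               # exploit: best estimate
            arm = max(range(k), key=lambda a: values[a])
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update
        total_reward += reward
    return total_reward, counts

reward, pulls = epsilon_greedy_bandit([0.2, 0.5, 0.7])
print(reward, pulls)  # most pulls should concentrate on the 0.7 arm
```

With epsilon fixed at 0.1, roughly 10% of rounds are spent exploring forever; many of the papers below study schedules and confidence-based rules (UCB, Thompson sampling) that reduce this exploration cost over time.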

Papers

Showing 1201–1250 of 1262 papers

Title | Status | Hype
Policy Learning with Adaptively Collected Data | Code | 0
Neural Contextual Bandits without Regret | Code | 0
Meta-in-context learning in large language models | Code | 0
Neural Contextual Bandits with UCB-based Exploration | Code | 0
Adaptive Experimentation with Delayed Binary Feedback | Code | 0
Group Meritocratic Fairness in Linear Contextual Bandits | Code | 0
Neural Linear Bandits: Overcoming Catastrophic Forgetting through Likelihood Matching | Code | 0
Power Constrained Bandits | Code | 0
Batched Multi-armed Bandits Problem | Code | 0
Harnessing the Power of Federated Learning in Federated Contextual Bandits | Code | 0
Truncated LinUCB for Stochastic Linear Bandits | Code | 0
Adaptive Estimator Selection for Off-Policy Evaluation | Code | 0
Practical Bayesian Learning of Neural Networks via Adaptive Optimisation Methods | Code | 0
NeuroSep-CP-LCB: A Deep Learning-based Contextual Multi-armed Bandit Algorithm with Uncertainty Quantification for Early Sepsis Prediction | Code | 0
Heterogeneous Multi-player Multi-armed Bandits: Closing the Gap and Generalization | Code | 0
A Survey on Contextual Multi-armed Bandits | Code | 0
Practical Calculation of Gittins Indices for Multi-armed Bandits | Code | 0
Stay With Me: Lifetime Maximization Through Heteroscedastic Linear Bandits With Reneging | Code | 0
A Field Test of Bandit Algorithms for Recommendations: Understanding the Validity of Assumptions on Human Preferences in Multi-armed Bandits | Code | 0
Hierarchical Multi-Armed Bandits for the Concurrent Intelligent Tutoring of Concepts and Problems of Varying Difficulty Levels | Code | 0
Towards the D-Optimal Online Experiment Design for Recommender Selection | Code | 0
Distributionally Robust Policy Evaluation under General Covariate Shift in Contextual Bandits | Code | 0
When is Off-Policy Evaluation (Reward Modeling) Useful in Contextual Bandits? A Data-Centric Perspective | Code | 0
Minimum Empirical Divergence for Sub-Gaussian Linear Bandits | Code | 0
Regret Bounds for Thompson Sampling in Episodic Restless Bandit Problems | Code | 0
Mitigating Exposure Bias in Online Learning to Rank Recommendation: A Novel Reward Model for Cascading Bandits | Code | 0
Model-free Reinforcement Learning in Infinite-horizon Average-reward Markov Decision Processes | Code | 0
Nonparametric Gaussian Mixture Models for the Multi-Armed Bandit | Code | 0
Taming the Monster: A Fast and Simple Algorithm for Contextual Bandits | Code | 0
Two-Stage Neural Contextual Bandits for Personalised News Recommendation | Code | 0
Human in the Loop Adaptive Optimization for Improved Time Series Forecasting | Code | 0
Adversarial Attacks on Combinatorial Multi-Armed Bandits | Code | 0
Machine Teaching of Active Sequential Learners | Code | 0
Doubly-Robust Lasso Bandit | Code | 0
A Survey of Online Experiment Design with the Stochastic Multi-Armed Bandit | Code | 0
Thompson Sampling via Local Uncertainty | Code | 0
Identification of the Generalized Condorcet Winner in Multi-dueling Bandits | Code | 0
SIC-MMAB: Synchronisation Involves Communication in Multiplayer Multi-Armed Bandits | Code | 0
Doubly Robust Policy Evaluation and Learning | Code | 0
Dual-Mandate Patrols: Multi-Armed Bandits for Green Security | Code | 0
Addressing the Long-term Impact of ML Decisions via Policy Regret | Code | 0
Test-Time Scaling of Diffusion Models via Noise Trajectory Search | Code | 0
Regulating Greed Over Time in Multi-Armed Bandits | Code | 0
Safe Exploration for Optimizing Contextual Bandits | Code | 0
Simulated Contextual Bandits for Personalization Tasks from Recommendation Datasets | Code | 0
Reinforcement Learning for Physical Layer Communications | Code | 0
Simultaneously Achieving Group Exposure Fairness and Within-Group Meritocracy in Stochastic Bandits | Code | 0
Mostly Exploration-Free Algorithms for Contextual Bandits | Code | 0
Scalable Exploration via Ensemble++ | Code | 0
The Assistive Multi-Armed Bandit | Code | 0
Page 25 of 26

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | — | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | — | Unverified
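Cumulative regret, the metric in the table above, is the gap between what an oracle that always pulls the best arm would earn in expectation and what the evaluated policy actually earned. As a minimal sketch (the function and variable names are illustrative, and the benchmark's own normalization of these scores is not reproduced here):

```python
def cumulative_regret(arm_means, chosen_arms):
    """Sum over rounds of (best arm's mean reward - chosen arm's mean reward)."""
    best = max(arm_means)
    return sum(best - arm_means[a] for a in chosen_arms)

# e.g. with arms [0.2, 0.5, 0.7], always pulling arm 0 for 100 rounds:
print(cumulative_regret([0.2, 0.5, 0.7], [0] * 100))  # 50.0
```

Lower is better: a policy that quickly identifies and commits to the best arm accumulates regret sublinearly in the number of rounds.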