SOTAVerified

Multi-Armed Bandits

Multi-armed bandits refer to a class of tasks in which a fixed, limited amount of resources must be allocated among competing choices (arms) so as to maximize expected gain, when each choice's payoff is only partially known. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
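
The exploration/exploitation trade-off mentioned above can be made concrete with a small simulation. The sketch below is a minimal, illustrative epsilon-greedy strategy on Bernoulli arms; it is not taken from any of the papers listed on this page, and the function name, arm payoff probabilities, and hyperparameters are assumptions chosen only for the example.

import random

def epsilon_greedy_bandit(arm_means, n_rounds=1000, epsilon=0.1, seed=0):
    """Minimal epsilon-greedy sketch: with probability epsilon explore a
    random arm, otherwise exploit the arm with the best empirical mean."""
    rng = random.Random(seed)
    n_arms = len(arm_means)
    counts = [0] * n_arms          # number of pulls per arm
    estimates = [0.0] * n_arms     # empirical mean reward per arm
    total_reward = 0.0
    for _ in range(n_rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                            # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])   # exploit
        # Simulated Bernoulli reward; in practice this comes from the environment.
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean
        total_reward += reward
    return estimates, total_reward

# Example: three arms with unknown payoff probabilities 0.2, 0.5, 0.7 (assumed values).
if __name__ == "__main__":
    estimates, reward = epsilon_greedy_bandit([0.2, 0.5, 0.7])
    print(estimates, reward)

With a small epsilon the learner mostly pulls the arm that currently looks best, so the empirical estimate of the best arm (0.7 here) becomes accurate while the other arms are sampled only occasionally.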

Papers

Showing 801–850 of 1262 papers

Title | Status | Hype
Asymptotic Convergence of Thompson Sampling |  | 0
Towards Fundamental Limits of Multi-armed Bandits with Random Walk Feedback |  | 0
Multi-armed Bandits with Cost Subsidy |  | 0
Multi-Armed Bandits with Censored Consumption of Resources |  | 0
On No-Sensing Adversarial Multi-player Multi-armed Bandits with Collision Communications |  | 0
Resource Allocation in Multi-armed Bandit Exploration: Overcoming Sublinear Scaling with Adaptive Parallelism |  | 0
Learning to Actively Learn: A Robust Approach |  | 0
Tractable contextual bandits beyond realizability |  | 0
Optimal Algorithms for Stochastic Multi-Armed Bandits with Heavy Tailed Rewards |  | 0
Online Semi-Supervised Learning with Bandit Feedback |  | 0
Online Algorithm for Unsupervised Sequential Selection with Contextual Information |  | 0
Achieving User-Side Fairness in Contextual Bandits |  | 0
Quantile Bandits for Best Arms Identification | Code | 0
DBA bandits: Self-driving index tuning under ad-hoc, analytical workloads with safety guarantees |  | 0
Stochastic Bandits with Vector Losses: Minimizing ℓ∞-Norm of Relative Losses |  | 0
Asymptotic Randomised Control with applications to bandits |  | 0
Multi-Armed Bandits with Dependent Arms |  | 0
Adapting to Delays and Data in Adversarial Multi-Armed Bandits |  | 0
Online and Distribution-Free Robustness: Regression and Contextual Bandits with Huber Contamination |  | 0
Instance-Dependent Complexity of Contextual Bandits and Reinforcement Learning: A Disagreement-Based Perspective |  | 0
CorrAttack: Black-box Adversarial Attack with Structured Search |  | 0
Neural Thompson Sampling | Code | 1
Is Reinforcement Learning More Difficult Than Bandits? A Near-optimal Algorithm Escaping the Curse of Horizon |  | 0
Contextual Bandits for adapting to changing User preferences over time |  | 0
Regret Bounds and Reinforcement Learning Exploration of EXP-based Algorithms |  | 0
Online Semi-Supervised Learning in Contextual Bandits with Episodic Reward | Code | 0
Thompson Sampling for Unsupervised Sequential Selection |  | 0
Partial Bandit and Semi-Bandit: Making the Most Out of Scarce Users' Feedback |  | 0
Deep Contextual Bandits for Fast Initial Access in mmWave Based User-Centric Ultra-Dense Networks |  | 0
Dual-Mandate Patrols: Multi-Armed Bandits for Green Security | Code | 0
Carousel Personalization in Music Streaming Apps with Contextual Bandits | Code | 1
VacSIM: Learning Effective Strategies for COVID-19 Vaccine Distribution using Reinforcement Learning | Code | 0
Unifying Clustered and Non-stationary Bandits |  | 0
Statistically Robust, Risk-Averse Best Arm Identification in Multi-Armed Bandits |  | 0
Dynamic Batch Learning in High-Dimensional Sparse Linear Contextual Bandits |  | 0
A Sleeping, Recovering Bandit Algorithm for Optimizing Recurring Notifications |  | 0
Contextual Bandits for Advertising Budget Allocation |  | 0
Offline Contextual Multi-armed Bandits for Mobile Health Interventions: A Case Study on Emotion Regulation |  | 0
Using Subjective Logic to Estimate Uncertainty in Multi-Armed Bandit Problems | Code | 0
Kernel Methods for Cooperative Multi-Agent Contextual Bandits |  | 0
Lenient Regret for Multi-Armed Bandits |  | 0
A framework for optimizing COVID-19 testing policy using a Multi Armed Bandit approach |  | 0
Greedy Bandits with Sampled Context |  | 0
Multi-Armed Bandits for Minesweeper: Profiting from Exploration-Exploitation Synergy |  | 0
Minimax Policy for Heavy-tailed Bandits |  | 0
Competing Bandits: The Perils of Exploration Under Competition |  | 0
Self-Tuning Bandits over Unknown Covariate-Shifts |  | 0
Upper Counterfactual Confidence Bounds: a New Optimism Principle for Contextual Bandits |  | 0
Optimal Learning for Structured Bandits | Code | 0
Quantum exploration algorithms for multi-armed bandits | Code | 0
Page 17 of 26

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 |  | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 |  | Unverified