
Multi-Armed Bandits

The multi-armed bandit problem is a task in which a fixed, limited amount of resources must be allocated among competing choices so as to maximize expected gain. These problems typically involve an exploration/exploitation trade-off: the learner must balance trying under-explored options against repeatedly playing the option that currently looks best.

(Image credit: Microsoft Research)
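To make the exploration/exploitation trade-off concrete, here is a minimal sketch of the classic UCB1 strategy (Auer et al., 2002) on a Bernoulli bandit. It is illustrative only and is not taken from any paper listed below; the arm means, horizon, and seed are arbitrary.

```python
import math
import random

def ucb1(true_means, horizon=10_000, seed=0):
    """Run UCB1 on a Bernoulli bandit; return expected cumulative regret."""
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k           # number of pulls per arm
    sums = [0.0] * k           # summed observed rewards per arm
    best = max(true_means)     # expected reward of the optimal arm
    regret = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1        # play each arm once to initialize estimates
        else:
            # empirical mean (exploitation) + confidence bonus (exploration)
            arm = max(range(k), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        regret += best - true_means[arm]   # expected (pseudo-)regret
    return regret

print(ucb1([0.2, 0.5, 0.7]))
```

Each round, the bonus term sqrt(2 ln t / n_a) favors rarely pulled arms, while the empirical mean favors arms that have paid off; as pull counts grow, the bonus shrinks and play concentrates on the best arm.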

Papers

Showing 301–350 of 1262 papers

Title | Status | Hype
A General Reduction for High-Probability Analysis with General Light-Tailed Distributions | – | 0
Catoni Contextual Bandits are Robust to Heavy-tailed Rewards | – | 0
An Optimistic Algorithm for Online Convex Optimization with Adversarial Constraints | – | 0
ADARES: Adaptive Resource Management for Virtual Machines | – | 0
AdaLinUCB: Opportunistic Learning for Contextual Bandits | – | 0
Byzantine-Resilient Decentralized Multi-Armed Bandits | – | 0
An optimal learning method for developing personalized treatment regimes | – | 0
Bypassing the Simulator: Near-Optimal Adversarial Linear Contextual Bandits | – | 0
Bypassing the Monster: A Faster and Simpler Optimal Algorithm for Contextual Bandits under Realizability | – | 0
An Optimal Algorithm for Multiplayer Multi-Armed Bandits | – | 0
Building Bridges: Viewing Active Learning from the Multi-Armed Bandit Lens | – | 0
Budgeted Recommendation with Delayed Feedback | – | 0
Tsallis-INF: An Optimal Algorithm for Stochastic and Adversarial Bandits | – | 0
Budgeted Combinatorial Multi-Armed Bandits | – | 0
An Optimal Algorithm for Adversarial Bandits with Arbitrary Delays | – | 0
Adaptive, Robust and Scalable Bayesian Filtering for Online Learning | – | 0
Active Velocity Estimation using Light Curtains via Self-Supervised Multi-Armed Bandits | – | 0
Achieving adaptivity and optimality for multi-armed bandits using Exponential-Kullback Leibler Maillard Sampling | – | 0
Budget-Constrained Multi-Armed Bandits with Multiple Plays | – | 0
Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism | – | 0
An Instrumental Value for Data Production and its Application to Data Pricing | – | 0
Breaking the √T Barrier: Instance-Independent Logarithmic Regret in Stochastic Contextual Linear Bandits | – | 0
Breaking the log(1/Δ_2) Barrier: Better Batched Best Arm Identification with Adaptive Grids | – | 0
An Instance-Dependent Analysis for the Cooperative Multi-Player Multi-Armed Bandit | – | 0
Adaptive Regret for Bandits Made Possible: Two Queries Suffice | – | 0
Bounded Regret for Finitely Parameterized Multi-Armed Bandits | – | 0
Boundary Crossing Probabilities for General Exponential Families | – | 0
An Improved Relaxation for Oracle-Efficient Adversarial Contextual Bandits | – | 0
Bootstrapping Upper Confidence Bound | – | 0
An Exploration-free Method for a Linear Stochastic Bandit Driven by a Linear Gaussian Dynamical System | – | 0
Active Search for Sparse Signals with Region Sensing | – | 0
Boltzmann Exploration Done Right | – | 0
BOF-UCB: A Bayesian-Optimistic Frequentist Algorithm for Non-Stationary Contextual Bandits | – | 0
BISTRO: An Efficient Relaxation-Based Method for Contextual Bandits | – | 0
Bi-Criteria Optimization for Combinatorial Bandits: Sublinear Regret and Constraint Violation under Bandit Feedback | – | 0
A New Benchmark for Online Learning with Budget-Balancing Constraints | – | 0
Beyond UCB: Optimal and Efficient Contextual Bandits with Regression Oracles | – | 0
Beyond the Hazard Rate: More Perturbation Algorithms for Adversarial Multi-armed Bandits | – | 0
Better Algorithms for Stochastic Bandits with Adversarial Corruptions | – | 0
Best-of-Both-Worlds Linear Contextual Bandits | – | 0
A New Algorithm for Non-stationary Contextual Bandits: Efficient, Optimal, and Parameter-free | – | 0
Adaptively Learning to Select-Rank in Online Platforms | – | 0
Active Search for High Recall: a Non-Stationary Extension of Thompson Sampling | – | 0
A Central Limit Theorem, Loss Aversion and Multi-Armed Bandits | – | 0
A Batch Sequential Halving Algorithm without Performance Degradation | – | 0
Best-of-Both-Worlds Algorithms for Linear Contextual Bandits | – | 0
An Empirical Evaluation of Thompson Sampling | – | 0
Best Arm Identification under Additive Transfer Bandits | – | 0
Best Arm Identification in Stochastic Bandits: Beyond β-optimality | – | 0
An Empirical Evaluation of Federated Contextual Bandit Algorithms | – | 0
Page 7 of 26

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | – | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | – | Unverified
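The two entries above are Bayesian linear models scored by cumulative regret: the sum, over the horizon, of the gap between the expected reward of the best available action and that of the action actually played. As a rough illustration of how such a number is produced, here is a minimal linear Thompson sampling loop; the feature dimension, noise level, prior, and random contexts are assumptions made for the sketch, not the benchmark's actual setup.

```python
import numpy as np

def linear_thompson(true_theta, horizon=2000, n_arms=4, noise=0.1, seed=0):
    """Minimal linear Thompson sampling on random contexts.

    Maintains a Gaussian posterior over the unknown reward parameter and,
    each round, acts greedily with respect to one posterior sample.
    (Illustrative setup; not the benchmark's configuration.)
    """
    rng = np.random.default_rng(seed)
    d = len(true_theta)
    A = np.eye(d)                  # posterior precision (unit Gaussian prior)
    b = np.zeros(d)                # precision-weighted observation sum
    regret = 0.0
    for _ in range(horizon):
        X = rng.normal(size=(n_arms, d))        # per-arm feature vectors
        cov = np.linalg.inv(A)
        sample = rng.multivariate_normal(cov @ b, cov)
        arm = int(np.argmax(X @ sample))        # greedy on the sample
        reward = X[arm] @ true_theta + noise * rng.normal()
        A += np.outer(X[arm], X[arm]) / noise**2   # Bayesian linear update
        b += X[arm] * reward / noise**2
        regret += (X @ true_theta).max() - X[arm] @ true_theta
    return regret

print(linear_thompson(np.ones(5) / np.sqrt(5)))  # cumulative regret
```

Lower cumulative regret is better: a learner that quickly identifies the best action per context stops accumulating per-round gaps, so the total flattens out over the horizon.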