SOTAVerified

Multi-Armed Bandits

The multi-armed bandit problem is a task in which a fixed, limited amount of resources must be allocated among competing choices in a way that maximizes the expected gain. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
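To make the exploration/exploitation trade-off concrete, below is a minimal sketch of the classic UCB1 strategy on a toy Bernoulli bandit. The arm probabilities, horizon, and function names are illustrative assumptions and are not taken from any paper listed on this page.

```python
# Minimal UCB1 sketch on a toy Bernoulli bandit (illustrative assumptions only).
import math
import random

def ucb1(arm_probs, horizon=10_000, seed=0):
    """Play a Bernoulli bandit for `horizon` rounds with the UCB1 rule."""
    rng = random.Random(seed)
    n_arms = len(arm_probs)
    counts = [0] * n_arms    # how often each arm has been pulled
    values = [0.0] * n_arms  # empirical mean reward of each arm
    total_reward = 0.0

    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # pull every arm once to initialise the estimates
        else:
            # exploitation term (empirical mean) + exploration bonus
            arm = max(
                range(n_arms),
                key=lambda a: values[a] + math.sqrt(2 * math.log(t) / counts[a]),
            )
        reward = 1.0 if rng.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
        total_reward += reward

    return total_reward, counts

if __name__ == "__main__":
    reward, counts = ucb1([0.2, 0.5, 0.7])  # hypothetical arm means
    print("total reward:", reward)
    print("pulls per arm:", counts)
```

The exploration bonus sqrt(2 ln t / n_a) shrinks as an arm is pulled more often, so the policy gradually shifts from exploring under-sampled arms to exploiting the empirically best one.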

Papers

Showing 501–550 of 1262 papers

Title | Status | Hype
Exposure-Aware Recommendation using Contextual Bandits | - | 0
Variational Inference for Model-Free and Model-Based Reinforcement Learning | - | 0
Dynamic Global Sensitivity for Differentially Private Contextual Bandits | - | 0
A Provably Efficient Model-Free Posterior Sampling Method for Episodic Reinforcement Learning | - | 0
Understanding the stochastic dynamics of sequential decision-making processes: A path-integral analysis of multi-armed bandits | - | 0
Increasing Students' Engagement to Reminder Emails Through Multi-Armed Bandits | - | 0
Nonstationary Continuum-Armed Bandit Strategies for Automated Trading in a Simulated Financial Market | Code | 0
Raising Student Completion Rates with Adaptive Curriculum and Contextual Bandits | - | 0
Towards Soft Fairness in Restless Multi-Armed Bandits | - | 0
SPRT-based Efficient Best Arm Identification in Stochastic Bandits | - | 0
Online Learning with Off-Policy Feedback | - | 0
Parallel Best Arm Identification in Heterogeneous Environments | - | 0
Contextual Bandits with Smooth Regret: Efficient Learning in Continuous Action Spaces | Code | 0
Contextual Bandits with Large Action Spaces: Made Practical | Code | 0
Transformer Neural Processes: Uncertainty-Aware Meta Learning Via Sequence Modeling | Code | 1
Online SuBmodular + SuPermodular (BP) Maximization with Bandit Feedback | Code | 0
Model Selection in Reinforcement Learning with General Function Approximations | - | 0
Instance-optimal PAC Algorithms for Contextual Bandits | - | 0
Autonomous Drug Design with Multi-Armed Bandits | - | 0
Ranking In Generalized Linear Bandits | Code | 0
Two-Stage Neural Contextual Bandits for Personalised News Recommendation | Code | 0
Joint Representation Training in Sequential Tasks with Shared Structure | - | 0
Langevin Monte Carlo for Contextual Bandits | Code | 1
Multiple-Play Stochastic Bandits with Shareable Finite-Capacity Arms | - | 0
On Private Online Convex Optimization: Optimal Algorithms in ℓ_p-Geometry and High Dimensional Contextual Bandits | Code | 0
A Contextual Combinatorial Semi-Bandit Approach to Network Bottleneck Identification | - | 0
Combinatorial Pure Exploration of Causal Bandits | - | 0
Distributed Differential Privacy in Multi-Armed Bandits | - | 0
Squeeze All: Novel Estimator and Self-Normalized Bound for Linear Contextual Bandits | - | 0
Communication Efficient Distributed Learning for Kernelized Contextual Bandits | - | 0
Conformal Off-Policy Prediction in Contextual Bandits | - | 0
Neural Bandit with Arm Group Graph | - | 0
Efficient Resource Allocation with Fairness Constraints in Restless Multi-Armed Bandits | - | 0
Finite-Time Regret of Thompson Sampling Algorithms for Exponential Family Multi-Armed Bandits | - | 0
A Simple and Optimal Policy Design with Safety against Heavy-Tailed Risk for Stochastic Bandits | - | 0
Group Meritocratic Fairness in Linear Contextual Bandits | Code | 0
Robust Pareto Set Identification with Contaminated Bandit Feedback | - | 0
Asymptotic Instance-Optimal Algorithms for Interactive Decision Making | - | 0
Contextual Bandits with Knapsacks for a Conversion Model | - | 0
Provably and Practically Efficient Neural Contextual Bandits | - | 0
Provable General Function Class Representation Learning in Multitask Bandits and MDPs | - | 0
Online Meta-Learning in Adversarial Multi-Armed Bandits | - | 0
Optimistic Whittle Index Policy: Online Learning for Restless Bandits | Code | 0
Quantum Multi-Armed Bandits and Stochastic Linear Bandits Enjoy Logarithmic Regrets | - | 0
Federated Neural Bandits | Code | 0
Fairness and Welfare Quantification for Regret in Multi-Armed Bandits | - | 0
Meta-Learning Adversarial Bandits | - | 0
Lifting the Information Ratio: An Information-Theoretic Analysis of Thompson Sampling for Contextual Bandits | - | 0
Exploration, Exploitation, and Engagement in Multi-Armed Bandits with Abandonment | - | 0
Contextual Pandora's Box | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | - | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | - | Unverified
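The benchmark metric above is cumulative regret. As a rough sketch of how that metric is commonly computed for a stochastic bandit (the arm means and choice trace below are hypothetical and are not the benchmark's actual data):

```python
# Sketch of expected cumulative regret: the gap between always playing the
# best arm in expectation and the arms a policy actually chose.
# All numbers below are hypothetical, for illustration only.

def cumulative_regret(arm_means, chosen_arms):
    """Expected cumulative regret after each round of a bandit run."""
    best_mean = max(arm_means)
    regret, trajectory = 0.0, []
    for arm in chosen_arms:
        regret += best_mean - arm_means[arm]  # per-round expected gap
        trajectory.append(regret)
    return trajectory

if __name__ == "__main__":
    means = [0.2, 0.5, 0.7]            # hypothetical Bernoulli arm means
    chosen = [0, 1, 2, 2, 1, 2, 2, 2]  # hypothetical policy choices
    print(cumulative_regret(means, chosen)[-1])  # final cumulative regret
```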