
Multi-Armed Bandits

Multi-armed bandits refers to a class of sequential decision problems in which a fixed, limited set of resources must be allocated among competing choices (the "arms") so as to maximize expected gain, when each choice's payoff is only partially known and is learned from the rewards observed along the way. These problems typically involve an exploration/exploitation trade-off: spending pulls on arms that look suboptimal in order to learn more about them, versus exploiting the arm that currently appears best.

(Image credit: Microsoft Research)
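
As a concrete illustration of the trade-off, below is a minimal sketch of UCB1 (Auer et al., 2002), one classic bandit algorithm, run on simulated Bernoulli arms. The arm probabilities and horizon are arbitrary example values, not taken from any paper listed on this page.

    import math
    import random

    def ucb1(arm_probs, horizon):
        """Minimal UCB1: play each arm once, then repeatedly pick the arm
        maximizing empirical mean reward plus a shrinking exploration bonus."""
        k = len(arm_probs)
        counts = [0] * k      # pulls per arm
        sums = [0.0] * k      # total reward per arm
        total = 0.0
        for t in range(1, horizon + 1):
            if t <= k:
                arm = t - 1   # initialization: try every arm once
            else:
                # bonus term keeps rarely pulled arms attractive
                arm = max(range(k), key=lambda a: sums[a] / counts[a]
                          + math.sqrt(2 * math.log(t) / counts[a]))
            reward = 1.0 if random.random() < arm_probs[arm] else 0.0
            counts[arm] += 1
            sums[arm] += reward
            total += reward
        return total, counts

    # Hypothetical arms with success probabilities 0.2, 0.5, 0.7:
    reward, pulls = ucb1([0.2, 0.5, 0.7], horizon=10_000)
    print(round(reward), pulls)  # most pulls should concentrate on the 0.7 arm

The square-root bonus keeps rarely pulled arms attractive (exploration) but vanishes as an arm accumulates pulls, so play gradually concentrates on the empirically best arm (exploitation).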

Papers

Showing 1101-1150 of 1262 papers

Title | Status | Hype
An Optimal Algorithm for Multiplayer Multi-Armed Bandits | | 0
An optimal learning method for developing personalized treatment regimes | | 0
An Optimistic Algorithm for Online Convex Optimization with Adversarial Constraints | | 0
A General Reduction for High-Probability Analysis with General Light-Tailed Distributions | | 0
A Novel Approach to Balance Convenience and Nutrition in Meals With Long-Term Group Recommendations and Reasoning on Multimodal Recipes and its Implementation in BEACON | | 0
A One-Size-Fits-All Solution to Conservative Bandit Problems | | 0
Approximate Function Evaluation via Multi-Armed Bandits | | 0
Approximately Stationary Bandits with Knapsacks | | 0
A Provably Efficient Model-Free Posterior Sampling Method for Episodic Reinforcement Learning | | 0
A Regret bound for Non-stationary Multi-Armed Bandits with Fairness Constraints | | 0
A Reinforcement-Learning-Enhanced LLM Framework for Automated A/B Testing in Personalized Marketing | | 0
A Risk-Averse Framework for Non-Stationary Stochastic Multi-Armed Bandits | | 0
A Simple and Optimal Policy Design with Safety against Heavy-Tailed Risk for Stochastic Bandits | | 0
A Sleeping, Recovering Bandit Algorithm for Optimizing Recurring Notifications | | 0
A Survey of Learning in Multiagent Environments: Dealing with Non-Stationarity | | 0
A Survey of Risk-Aware Multi-Armed Bandits | | 0
Asymptotically Best Causal Effect Identification with Multi-Armed Bandits | | 0
Asymptotically Optimal Regret for Black-Box Predict-then-Optimize | | 0
The Choice of Noninformative Priors for Thompson Sampling in Multiparameter Bandit Models | | 0
Asymptotically Unbiased Off-Policy Policy Evaluation when Reusing Old Data in Nonstationary Environments | | 0
Asymptotic Convergence of Thompson Sampling | | 0
Asymptotic Instance-Optimal Algorithms for Interactive Decision Making | | 0
Asymptotic Performance of Thompson Sampling in the Batched Multi-Armed Bandits | | 0
Asymptotic Randomised Control with applications to bandits | | 0
Augmenting Online RL with Offline Data is All You Need: A Unified Hybrid RL Algorithm Design and Analysis | | 0
A Reduction-Based Framework for Conservative Bandits and Reinforcement Learning | | 0
Automatic Ensemble Learning for Online Influence Maximization | | 0
AutoML for Contextual Bandits | | 0
Autonomous Drug Design with Multi-Armed Bandits | | 0
Balanced Linear Contextual Bandits | | 0
Balanced off-policy evaluation in general action spaces | | 0
Balancing Act: Prioritization Strategies for LLM-Designed Restless Bandit Rewards | | 0
Ballooning Multi-Armed Bandits | | 0
Bandit Algorithms for Prophet Inequality and Pandora's Box | | 0
Exploration Through Reward Biasing: Reward-Biased Maximum Likelihood Estimation for Stochastic Multi-Armed Bandits | | 0
BanditMF: Multi-Armed Bandit Based Matrix Factorization Recommender System | | 0
BanditQ: Fair Bandits with Guaranteed Rewards | | 0
BanditRank: Learning to Rank Using Contextual Bandits | | 0
Bandit Regret Scaling with the Effective Loss Range | | 0
Bandits Don't Follow Rules: Balancing Multi-Facet Machine Translation with Multi-Armed Bandits | | 0
Bandits for Learning to Explain from Explanations | | 0
Bandits meet Computer Architecture: Designing a Smartly-allocated Cache | | 0
Bandit Social Learning: Exploration under Myopic Behavior | | 0
Bandits Warm-up Cold Recommender Systems | | 0
Preferences Evolve And So Should Your Bandits: Bandits with Evolving States for Online Platforms | | 0
Bandits with Knapsacks beyond the Worst Case | | 0
Bandits with Partially Observable Confounded Data | | 0
Bandits with Temporal Stochastic Constraints | | 0
Banker Online Mirror Descent | | 0
Page 23 of 26

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified
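
For context, cumulative regret, the metric reported above, is the expected reward given up relative to always playing the best arm: R(T) = sum_{t=1..T} (mu* - mu_{a_t}), where mu* is the best arm's mean and a_t is the arm chosen at round t. The sketch below shows this bookkeeping for simulated arms with known means; the means and pull sequence are hypothetical example values, and this is not the benchmark's evaluation code.

    def cumulative_regret(arm_means, chosen_arms):
        # Expected regret of a pull sequence given the true (simulation-only)
        # arm means: sum of per-round gaps to the best arm's mean.
        best = max(arm_means)
        return sum(best - arm_means[a] for a in chosen_arms)

    # Always pulling arm 1 (mean 0.5) instead of arm 2 (mean 0.7) for 100
    # rounds loses 0.2 per round:
    print(cumulative_regret([0.2, 0.5, 0.7], [1] * 100))  # 20.0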