
Multi-Armed Bandits

Multi-armed bandits refer to a class of tasks in which a fixed, limited set of resources must be allocated among competing alternatives so as to maximize expected gain, while each alternative's reward distribution is only partially known and is learned as the allocation proceeds. These problems typically involve an exploration/exploitation trade-off: choosing between gathering more information about uncertain options and committing to the option that currently looks best.

(Image credit: Microsoft Research)
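To make the exploration/exploitation trade-off concrete, here is a minimal epsilon-greedy sketch on a Bernoulli bandit. It is an illustration only, not the method of any paper listed below; the arm probabilities (0.2, 0.5, 0.75) and epsilon = 0.1 are made-up assumptions.

```python
import random

def epsilon_greedy(true_probs, steps=10_000, epsilon=0.1):
    """Play a Bernoulli bandit for `steps` rounds with an epsilon-greedy policy.

    Returns the per-arm value estimates and the total reward collected.
    `true_probs` (the arms' success probabilities) are illustrative assumptions.
    """
    n_arms = len(true_probs)
    counts = [0] * n_arms    # number of pulls per arm
    values = [0.0] * n_arms  # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            # Explore: pull an arm uniformly at random.
            arm = random.randrange(n_arms)
        else:
            # Exploit: pull the arm with the best estimate so far.
            arm = max(range(n_arms), key=lambda a: values[a])
        reward = 1.0 if random.random() < true_probs[arm] else 0.0
        counts[arm] += 1
        # Incremental update of the running mean for the pulled arm.
        values[arm] += (reward - values[arm]) / counts[arm]
        total_reward += reward
    return values, total_reward

if __name__ == "__main__":
    estimates, reward = epsilon_greedy([0.2, 0.5, 0.75])
    print("estimated arm values:", [round(v, 3) for v in estimates])
    print("total reward:", reward)
```

With epsilon = 0.1 the learner spends roughly 10% of its pulls exploring uniformly at random and the rest exploiting its current best estimate; methods such as UCB and Thompson Sampling, the subject of several papers below, replace this fixed split with uncertainty-driven exploration.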

Papers

Showing 1151–1200 of 1262 papers

Title | Status | Hype
Banker Online Mirror Descent: A Universal Approach for Delayed Online Bandit Learning | - | 0
Batched Bandits with Crowd Externalities | - | 0
Batched Coarse Ranking in Multi-Armed Bandits | - | 0
Regret Bounds for Batched Bandits | - | 0
Batched Nonparametric Bandits via k-Nearest Neighbor UCB | - | 0
Batched Nonparametric Contextual Bandits | - | 0
Batched Online Contextual Sparse Bandits with Sequential Inclusion of Features | - | 0
Batched Thompson Sampling | - | 0
Batched Thompson Sampling for Multi-Armed Bandits | - | 0
Batch Ensemble for Variance Dependent Regret in Stochastic Bandits | - | 0
Towards Bayesian Data Selection | - | 0
Bayesian decision-making under misspecified priors with applications to meta-learning | - | 0
BEACON: Balancing Convenience and Nutrition in Meals With Long-Term Group Recommendations and Reasoning on Multimodal Recipes | - | 0
Beam Learning -- Using Machine Learning for Finding Beam Directions | - | 0
Be Greedy in Multi-Armed Bandits | - | 0
Efficient Prompt Optimization Through the Lens of Best Arm Identification | - | 0
Quantile Multi-Armed Bandits: Optimal Best-Arm Identification and a Differentially Private Scheme | - | 0
Best-Arm Identification in Correlated Multi-Armed Bandits | - | 0
Best Arm Identification in Linked Bandits | - | 0
Best arm identification in multi-armed bandits with delayed feedback | - | 0
Best Arm Identification in Restless Markov Multi-Armed Bandits | - | 0
Best Arm Identification in Stochastic Bandits: Beyond β-optimality | - | 0
Best Arm Identification under Additive Transfer Bandits | - | 0
Best-of-Both-Worlds Algorithms for Linear Contextual Bandits | - | 0
Best-of-Both-Worlds Linear Contextual Bandits | - | 0
Better Algorithms for Stochastic Bandits with Adversarial Corruptions | - | 0
Beyond the Hazard Rate: More Perturbation Algorithms for Adversarial Multi-armed Bandits | - | 0
Beyond UCB: Optimal and Efficient Contextual Bandits with Regression Oracles | - | 0
Bi-Criteria Optimization for Combinatorial Bandits: Sublinear Regret and Constraint Violation under Bandit Feedback | - | 0
BISTRO: An Efficient Relaxation-Based Method for Contextual Bandits | - | 0
BOF-UCB: A Bayesian-Optimistic Frequentist Algorithm for Non-Stationary Contextual Bandits | - | 0
Boltzmann Exploration Done Right | - | 0
Bootstrapping Upper Confidence Bound | - | 0
Boundary Crossing Probabilities for General Exponential Families | - | 0
Bounded Regret for Finitely Parameterized Multi-Armed Bandits | - | 0
Breaking the (1/Δ_2) Barrier: Better Batched Best Arm Identification with Adaptive Grids | - | 0
Breaking the √T Barrier: Instance-Independent Logarithmic Regret in Stochastic Contextual Linear Bandits | - | 0
Bridging Offline Reinforcement Learning and Imitation Learning: A Tale of Pessimism | - | 0
Budget-Constrained Multi-Armed Bandits with Multiple Plays | - | 0
Budgeted Combinatorial Multi-Armed Bandits | - | 0
Budgeted Recommendation with Delayed Feedback | - | 0
Building Bridges: Viewing Active Learning from the Multi-Armed Bandit Lens | - | 0
Bypassing the Monster: A Faster and Simpler Optimal Algorithm for Contextual Bandits under Realizability | - | 0
Bypassing the Simulator: Near-Optimal Adversarial Linear Contextual Bandits | - | 0
Byzantine-Resilient Decentralized Multi-Armed Bandits | - | 0
Catoni Contextual Bandits are Robust to Heavy-tailed Rewards | - | 0
Causal Bandits: Online Decision-Making in Endogenous Settings | - | 0
Causal Contextual Bandits with Targeted Interventions | - | 0
Causal Feature Selection Method for Contextual Multi-Armed Bandits in Recommender System | - | 0
Censored Semi-Bandits for Resource Allocation | - | 0
Page 24 of 26

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | - | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | - | Unverified