SOTAVerified

Multi-Armed Bandits

Multi-armed bandits refer to the task of allocating a fixed, limited set of resources among competing choices (arms) in a way that maximizes expected gain, when each choice's payoff is only partially known in advance. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
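To make the exploration/exploitation trade-off concrete, here is a minimal sketch of an ε-greedy bandit in Python. The arm success probabilities, the exploration rate, and the horizon are illustrative assumptions, not values taken from any paper listed below.

```python
import random

# Minimal epsilon-greedy bandit sketch (illustrative assumptions only).
ARM_PROBS = [0.3, 0.5, 0.7]      # hypothetical Bernoulli success rates, unknown to the agent
EPSILON = 0.1                    # probability of exploring a random arm

counts = [0] * len(ARM_PROBS)    # number of pulls per arm
values = [0.0] * len(ARM_PROBS)  # running mean reward per arm

def select_arm():
    # Explore with probability EPSILON, otherwise exploit the current best estimate.
    if random.random() < EPSILON:
        return random.randrange(len(ARM_PROBS))
    return max(range(len(ARM_PROBS)), key=lambda a: values[a])

def update(arm, reward):
    # Incremental update of the mean reward estimate for the pulled arm.
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

total_reward = 0.0
for _ in range(10_000):
    arm = select_arm()
    reward = 1.0 if random.random() < ARM_PROBS[arm] else 0.0
    update(arm, reward)
    total_reward += reward

print("estimated arm values:", [round(v, 3) for v in values])
print(f"average reward: {total_reward / 10_000:.3f}")
```

With enough pulls the value estimates concentrate near the true success rates, and the agent spends most of its pulls on the best arm while the occasional random pull keeps the other estimates from going stale.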

Papers

Showing 451–475 of 1262 papers

Title | Status | Hype
Genetic multi-armed bandits: a reinforcement learning approach for discrete optimization via simulation | | 0
Adversarial Rewards in Universal Learning for Contextual Bandits | | 0
Piecewise-Stationary Multi-Objective Multi-Armed Bandit with Application to Joint Communications and Sensing | Code | 0
Leveraging User-Triggered Supervision in Contextual Bandits | | 0
On Private and Robust Bandits | | 0
Multiplier Bootstrap-based Exploration | | 0
Randomized Greedy Learning for Non-monotone Stochastic Submodular Maximization Under Full-bandit Feedback | | 0
Stochastic Contextual Bandits with Long Horizon Rewards | | 0
Improved Algorithms for Multi-period Multi-class Packing Problems with Bandit Feedback | | 0
Quantum contextual bandits and recommender systems for quantum data | | 0
Adversarial Attacks on Adversarial Bandits | | 0
A Framework for Adapting Offline Algorithms to Solve Combinatorial Multi-Armed Bandit Problems with Bandit Feedback | | 0
Contextual Causal Bayesian Optimisation | | 0
Communication-Efficient Collaborative Regret Minimization in Multi-Armed Bandits | | 0
Banker Online Mirror Descent: A Universal Approach for Delayed Online Bandit Learning | | 0
Quantum Heavy-tailed Bandits | | 0
Multi-Armed Bandits and Quantum Channel Oracles | | 0
Multi-armed Bandit Learning for TDMA Transmission Slot Scheduling and Defragmentation for Improved Bandwidth Usage | | 0
Best Arm Identification in Stochastic Bandits: Beyond β-optimality | | 0
Local Differential Privacy for Sequential Decision Making in a Changing Environment | | 0
Contextual Bandits and Optimistically Universal Learning | | 0
Online Statistical Inference for Contextual Bandits via Stochastic Gradient Descent | | 0
On the Complexity of Representation Learning in Contextual Linear Bandits | | 0
MABSplit: Faster Forest Training Using Multi-Armed Bandits | Code | 0
Faster Maximum Inner Product Search in High Dimensions | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified
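Cumulative regret, the metric reported above, measures the gap between the reward an agent would have collected by always pulling the best arm and the reward it actually collected. A minimal sketch of how it is computed is below; the optimal-arm mean and the observed rewards are made-up numbers for illustration, not values from the table.

```python
# Illustrative cumulative-regret computation (all numbers are assumptions).
best_mean = 0.7                      # mean reward of the optimal arm, known only to the evaluator
rewards = [0.5, 0.7, 0.3, 0.7, 0.6]  # rewards observed by the agent over five rounds

# Cumulative regret after each round: running sum of (best_mean - observed reward).
cumulative_regret = []
running = 0.0
for r in rewards:
    running += best_mean - r
    cumulative_regret.append(running)

print([round(x, 2) for x in cumulative_regret])  # [0.2, 0.2, 0.6, 0.6, 0.7]
```

Lower cumulative regret is better: it means the algorithm converged on the best arm quickly and wasted few pulls on inferior ones.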