SOTAVerified

Multi-Armed Bandits

The multi-armed bandit problem is a task in which a fixed amount of resources must be allocated among competing choices (arms) so as to maximize expected gain, when each choice's properties are only partially known at the time of allocation. These problems typically involve an exploration/exploitation trade-off: balancing the gathering of new information about the arms against playing the arm that currently looks best.

(Image credit: Microsoft Research)
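To make the exploration/exploitation trade-off concrete, here is a minimal sketch of the classic epsilon-greedy strategy on simulated Bernoulli arms. All names, parameters, and reward probabilities below are illustrative assumptions, not taken from any paper in the list.

```python
import random

def epsilon_greedy(true_probs, epsilon=0.1, horizon=10_000, seed=0):
    """Play a Bernoulli bandit with an epsilon-greedy policy.

    true_probs: hidden success probability of each arm (unknown to the agent).
    epsilon:    probability of exploring a uniformly random arm.
    horizon:    number of pulls.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    counts = [0] * n_arms           # pulls per arm
    estimates = [0.0] * n_arms      # empirical mean reward per arm
    total_reward = 0.0

    for _ in range(horizon):
        if rng.random() < epsilon:  # explore: pick a uniformly random arm
            arm = rng.randrange(n_arms)
        else:                       # exploit: pick the best current estimate
            arm = max(range(n_arms), key=lambda a: estimates[a])

        reward = 1.0 if rng.random() < true_probs[arm] else 0.0
        counts[arm] += 1
        # Incremental mean update avoids storing the full reward history.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward

    return total_reward, estimates

if __name__ == "__main__":
    reward, est = epsilon_greedy([0.2, 0.5, 0.7])
    print(f"total reward: {reward:.0f}, estimates: {[round(e, 3) for e in est]}")
```

With epsilon = 0.1 the agent plays the empirically best arm 90% of the time while reserving 10% of pulls for exploration; more refined strategies such as UCB or Thompson sampling adapt this balance over time rather than fixing it in advance.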

Papers

Showing 826–850 of 1262 papers

Title | Status | Hype
Reinforced Meta Active Learning | — | 0
Reinforcement Learning for Machine Learning Model Deployment: Evaluating Multi-Armed Bandits in ML Ops Environments | — | 0
Reinforcement learning techniques for Outer Loop Link Adaptation in 4G/5G systems | — | 0
Multi-Armed Bandits with Fairness Constraints for Distributing Resources to Human Teammates | — | 0
Reliability-Optimized User Admission Control for URLLC Traffic: A Neural Contextual Bandit Approach | — | 0
Remote Contextual Bandits | — | 0
Replicability is Asymptotically Free in Multi-armed Bandits | — | 0
Representation-Driven Reinforcement Learning | — | 0
Representative Arm Identification: A fixed confidence approach to identify cluster representatives | — | 0
Replicable Bandits | — | 0
Residual Bootstrap Exploration for Bandit Algorithms | — | 0
Resonance: Replacing Software Constants with Context-Aware Models in Real-time Communication | — | 0
Resource Allocation in Multi-armed Bandit Exploration: Overcoming Sublinear Scaling with Adaptive Parallelism | — | 0
Resource Allocation in NOMA-based Self-Organizing Networks using Stochastic Multi-Armed Bandits | — | 0
Resourceful Contextual Bandits | — | 0
Restless Multi-Armed Bandits under Exogenous Global Markov Process | — | 0
Restless Multi-armed Bandits under Frequency and Window Constraints for Public Service Inspections | — | 0
Revisiting Simple Regret: Fast Rates for Returning a Good Arm | — | 0
Reward Biased Maximum Likelihood Estimation for Reinforcement Learning | — | 0
Reward Maximization for Pure Exploration: Minimax Optimal Good Arm Identification for Nonparametric Multi-Armed Bandits | — | 0
Reward Teaching for Federated Multi-armed Bandits | — | 0
Rising Rested Bandits: Lower Bounds and Efficient Algorithms | — | 0
Risk-Averse Multi-Armed Bandits with Unobserved Confounders: A Case Study in Emotion Regulation in Mobile Health | — | 0
Risk averse non-stationary multi-armed bandits | — | 0
Risk-Aversion in Multi-armed Bandits | — | 0
Page 34 of 51

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | — | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | — | Unverified
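For context on the metric above: cumulative regret measures how much total reward a policy loses relative to always playing the best arm. Below is a minimal sketch of the standard (pseudo-)regret definition; the arm means and play sequence are hypothetical and are not taken from the benchmark rows above.

```python
def cumulative_regret(true_means, arms_played):
    """Cumulative pseudo-regret: the sum over rounds of the gap between
    the best arm's mean reward and the played arm's mean reward."""
    best = max(true_means)
    return sum(best - true_means[a] for a in arms_played)

# Hypothetical example: three arms with mean rewards 0.2, 0.5, 0.7;
# the policy plays arms 0, 2, 1, 2 over four rounds.
print(cumulative_regret([0.2, 0.5, 0.7], [0, 2, 1, 2]))  # ≈ 0.7
```

Lower is better: a policy that quickly identifies and sticks with the best arm accumulates regret slowly, which is why cumulative regret is the standard yardstick for comparing bandit algorithms.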