SOTAVerified

Multi-Armed Bandits

Multi-armed bandits refer to the problem of allocating a fixed, limited set of resources among competing alternatives so as to maximize expected gain. These problems typically involve an exploration/exploitation trade-off: the learner must balance trying arms whose payoffs are still uncertain against repeatedly playing the arm that currently looks best.
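
As a concrete illustration of the exploration/exploitation trade-off, here is a minimal sketch of the classic epsilon-greedy strategy on a Bernoulli bandit. The arm probabilities, the epsilon value, and the horizon are illustrative assumptions, not values taken from any paper listed below.

```python
import random

# Hypothetical setup: three arms with hidden Bernoulli reward probabilities.
TRUE_PROBS = [0.3, 0.5, 0.7]   # assumed for illustration; unknown to the learner
EPSILON = 0.1                  # fraction of rounds spent exploring
HORIZON = 10_000

counts = [0] * len(TRUE_PROBS)    # number of pulls per arm
values = [0.0] * len(TRUE_PROBS)  # running mean reward per arm

for t in range(HORIZON):
    if random.random() < EPSILON:
        # Explore: pull a uniformly random arm.
        arm = random.randrange(len(TRUE_PROBS))
    else:
        # Exploit: pull the arm with the best empirical mean so far.
        arm = max(range(len(TRUE_PROBS)), key=values.__getitem__)
    reward = 1.0 if random.random() < TRUE_PROBS[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update

print("estimated arm means:", [round(v, 3) for v in values])
```

With enough rounds the empirical means concentrate around the true probabilities, and the greedy step settles on the best arm; the constant exploration rate is what many of the papers below improve on.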

(Image credit: Microsoft Research)

Papers

Showing 551–575 of 1262 papers

| Title | Status | Hype |
|---|---|---|
| Information-Directed Selection for Top-Two Algorithms | Code | 0 |
| Neural Contextual Bandits Based Dynamic Sensor Selection for Low-Power Body-Area Networks | | 0 |
| Computationally Efficient Horizon-Free Reinforcement Learning for Linear Mixture MDPs | | 0 |
| Falsification of Multiple Requirements for Cyber-Physical Systems Using Online Generative Adversarial Networks and Multi-Armed Bandits | | 0 |
| Contextual Information-Directed Sampling | | 0 |
| Pessimism for Offline Linear Contextual Bandits using ℓ_p Confidence Sets | | 0 |
| SplitPlace: AI Augmented Splitting and Placement of Large-Scale Neural Networks in Mobile Edge Environments | Code | 1 |
| Stability Enforced Bandit Algorithms for Channel Selection in Remote State Estimation of Gauss-Markov Processes | | 0 |
| Breaking the √T Barrier: Instance-Independent Logarithmic Regret in Stochastic Contextual Linear Bandits | | 0 |
| Multi-Armed Bandits in Brain-Computer Interfaces | Code | 0 |
| Slowly Changing Adversarial Bandit Algorithms are Efficient for Discounted MDPs | | 0 |
| Semi-Parametric Contextual Bandits with Graph-Laplacian Regularization | | 0 |
| From Dirichlet to Rubin: Optimistic Exploration in RL without Bonuses | | 0 |
| Nearly Optimal Algorithms for Linear Contextual Bandits with Adversarial Corruptions | | 0 |
| A Survey of Risk-Aware Multi-Armed Bandits | | 0 |
| Selectively Contextual Bandits | | 0 |
| Federated Multi-Armed Bandits Under Byzantine Attacks | | 0 |
| Pervasive Machine Learning for Smart Radio Environments Enabled by Reconfigurable Intelligent Surfaces | Code | 1 |
| Multi-Player Multi-Armed Bandits with Finite Shareable Resources Arms: Learning Algorithms & Applications | | 0 |
| Evolutionary Multi-Armed Bandits with Genetic Thompson Sampling | Code | 0 |
| Rate-Constrained Remote Contextual Bandits | | 0 |
| Thompson Sampling for Bandit Learning in Matching Markets | Code | 0 |
| Worst-case Performance of Greedy Policies in Bandits with Imperfect Context Observations | | 0 |
| Stochastic Multi-armed Bandits with Non-stationary Rewards Generated by a Linear Dynamical System | | 0 |
| Strategies for Safe Multi-Armed Bandits with Logarithmic Regret and Risk | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | — | Unverified |
| 2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | — | Unverified |
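
The metric in this table, cumulative regret, is the total expected reward lost relative to always playing the best arm. Below is a minimal sketch of how it is typically computed in simulation; the helper name `cumulative_regret` and the example inputs are hypothetical, and real benchmarks average this quantity over many random seeds.

```python
def cumulative_regret(true_probs, pulled_arms):
    """Total expected reward gap versus always playing the optimal arm.

    true_probs  -- assumed true mean reward of each arm (known in simulation)
    pulled_arms -- sequence of arm indices chosen by the policy
    """
    best = max(true_probs)
    return sum(best - true_probs[a] for a in pulled_arms)

# Example: the optimal arm has mean 0.7, so pulling arm 0 (mean 0.3)
# costs 0.4 regret and pulling arm 1 (mean 0.5) costs 0.2.
print(cumulative_regret([0.3, 0.5, 0.7], [0, 2, 2, 1]))  # -> 0.6
```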