SOTAVerified

Multi-Armed Bandits

Multi-armed bandits are a class of problems in which a fixed budget of resources must be allocated among competing alternatives (arms) so as to maximize expected gain. These problems typically involve an exploration/exploitation trade-off: balancing the gathering of new information about uncertain arms against acting on the current best estimate.
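The trade-off can be illustrated with a minimal epsilon-greedy policy on Bernoulli arms. This is a generic sketch, not the method of any paper listed below; the arm means and parameters are illustrative.

```python
import random

def epsilon_greedy(arm_means, epsilon=0.1, steps=1000, seed=0):
    """Epsilon-greedy bandit sketch: with probability epsilon explore a
    random arm, otherwise exploit the arm with the best estimated mean."""
    rng = random.Random(seed)
    n = len(arm_means)
    counts = [0] * n          # number of pulls per arm
    values = [0.0] * n        # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                       # explore
        else:
            arm = max(range(n), key=values.__getitem__)  # exploit
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total_reward += reward
    return total_reward, counts
```

With arms of mean reward 0.2 and 0.8, the policy quickly concentrates its pulls on the better arm while still spending roughly an epsilon fraction of steps exploring.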

(Image credit: Microsoft Research)

Papers

Showing 221–230 of 1,262 papers

| Title | Status | Hype |
| --- | --- | --- |
| Causally Abstracted Multi-armed Bandits | Code | 0 |
| Censored Semi-Bandits: A Framework for Resource Allocation with Censored Feedback | Code | 0 |
| Contextual Linear Bandits under Noisy Features: Towards Bayesian Oracles | Code | 0 |
| Quantum Natural Policy Gradients: Towards Sample-Efficient Reinforcement Learning | Code | 0 |
| Recurrent Neural-Linear Posterior Sampling for Nonstationary Contextual Bandits | Code | 0 |
| Regret Bounds for Thompson Sampling in Episodic Restless Bandit Problems | Code | 0 |
| Relational Boosted Bandits | Code | 0 |
| Residual Loss Prediction: Reinforcement Learning With No Incremental Feedback | Code | 0 |
| Safe Exploration for Optimizing Contextual Bandits | Code | 0 |
| Doubly-Robust Lasso Bandit | Code | 0 |
Page 23 of 127

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified |
| 2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified |
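The "Cumulative regret" metric reported above is the gap between the reward of always playing the best arm and the reward actually collected. A minimal sketch of the expected-regret computation, assuming known arm means (the arm values here are illustrative, not taken from the benchmark):

```python
def cumulative_regret(arm_means, chosen_arms):
    """Expected cumulative regret: for each step, add the gap between the
    best arm's mean reward and the mean of the arm actually chosen."""
    best = max(arm_means)
    return sum(best - arm_means[arm] for arm in chosen_arms)
```

For example, with arms of mean 0.2 and 0.8, the choice sequence [0, 1, 1, 0] incurs a regret of 0.6 at each pull of arm 0 and none at arm 1, for a cumulative regret of 1.2.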