
Multi-Armed Bandits

Multi-armed bandit problems are tasks in which a fixed budget of resources must be allocated among competing choices (arms) so as to maximize expected gain. These problems typically involve an exploration/exploitation trade-off: the learner must balance trying arms whose payoffs are still uncertain against repeatedly pulling the arm that currently looks best.
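
To make the exploration/exploitation trade-off concrete, here is a minimal sketch of an epsilon-greedy strategy on a simulated Bernoulli bandit. The arm probabilities, epsilon value, and horizon are illustrative assumptions, not taken from any paper listed on this page.

```python
import random

def epsilon_greedy_bandit(true_probs, epsilon=0.1, n_rounds=10_000, seed=0):
    """Run epsilon-greedy on a Bernoulli bandit and return cumulative expected regret."""
    rng = random.Random(seed)
    n_arms = len(true_probs)
    counts = [0] * n_arms      # pulls per arm
    values = [0.0] * n_arms    # running mean reward per arm
    best_mean = max(true_probs)
    regret = 0.0

    for _ in range(n_rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                        # explore: random arm
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit: best estimate
        reward = 1.0 if rng.random() < true_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]    # incremental mean update
        regret += best_mean - true_probs[arm]                  # expected regret this round
    return regret

# Illustrative arm probabilities; the gap between arms drives how costly exploration is.
print(epsilon_greedy_bandit([0.3, 0.5, 0.7]))
```

Cumulative regret, reported in the benchmark results below, measures how much expected reward a strategy gives up relative to always pulling the best arm.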

(Image credit: Microsoft Research)

Papers

Showing 1101-1110 of 1262 papers

Title | Status | Hype
An Optimal Algorithm for Multiplayer Multi-Armed Bandits | | 0
An optimal learning method for developing personalized treatment regimes | | 0
An Optimistic Algorithm for Online Convex Optimization with Adversarial Constraints | | 0
A General Reduction for High-Probability Analysis with General Light-Tailed Distributions | | 0
A Novel Approach to Balance Convenience and Nutrition in Meals With Long-Term Group Recommendations and Reasoning on Multimodal Recipes and its Implementation in BEACON | | 0
A One-Size-Fits-All Solution to Conservative Bandit Problems | | 0
Approximate Function Evaluation via Multi-Armed Bandits | | 0
Approximately Stationary Bandits with Knapsacks | | 0
A Provably Efficient Model-Free Posterior Sampling Method for Episodic Reinforcement Learning | | 0
A Regret bound for Non-stationary Multi-Armed Bandits with Fairness Constraints | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified