
Multi-Armed Bandits

Multi-armed bandits refer to a class of problems in which a fixed, limited amount of resources must be allocated among competing choices so as to maximize expected gain, when each choice's payoff is only partially known and becomes better understood as resources are allocated to it. These problems typically involve an exploration/exploitation trade-off: sampling uncertain arms to learn about them versus repeatedly pulling the arm that currently looks best.

(Image credit: Microsoft Research)
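As a minimal sketch of the exploration/exploitation trade-off described above, consider an epsilon-greedy agent: with probability epsilon it explores a uniformly random arm, otherwise it exploits the arm with the highest empirical mean so far. The arm means, epsilon, and horizon below are illustrative assumptions, not values taken from any paper on this page.

```python
import random

def epsilon_greedy(arm_means, epsilon=0.1, horizon=1000):
    """Epsilon-greedy bandit sketch: explore with probability epsilon,
    otherwise exploit the arm with the highest empirical mean reward."""
    n_arms = len(arm_means)
    counts = [0] * n_arms        # number of pulls per arm
    estimates = [0.0] * n_arms   # empirical mean reward per arm
    total_reward = 0.0
    for _ in range(horizon):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)  # explore: random arm
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit
        # Stochastic reward; Gaussian noise around the (unknown) true mean
        reward = random.gauss(arm_means[arm], 1.0)
        counts[arm] += 1
        # Incremental update of the empirical mean
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return total_reward, estimates

# Example: three arms with hidden true means; the agent must discover
# that the third arm is best while still earning reward along the way.
print(epsilon_greedy([0.2, 0.5, 0.9]))
```

A smaller epsilon favors exploitation; methods such as UCB or Thompson Sampling (see the papers listed below) replace the fixed exploration rate with confidence- or posterior-driven exploration.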

Papers

Showing 801–810 of 1262 papers

| Title | Status | Hype |
|---|---|---|
| Finding All ε-Good Arms in Stochastic Bandits | | 0 |
| A Tractable Online Learning Algorithm for the Multinomial Logit Contextual Bandit | | 0 |
| Resonance: Replacing Software Constants with Context-Aware Models in Real-time Communication | | 0 |
| Fully Gap-Dependent Bounds for Multinomial Logit Bandit | | 0 |
| A New Bandit Setting Balancing Information from State Evolution and Corrupted Context | Code | 0 |
| Reward Biased Maximum Likelihood Estimation for Reinforcement Learning | | 0 |
| Metric-Free Individual Fairness with Cooperative Contextual Bandits | | 0 |
| Improving Offline Contextual Bandits with Distributional Robustness | | 0 |
| Active Reinforcement Learning: Observing Rewards at a Cost | | 0 |
| Asymptotic Convergence of Thompson Sampling | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified |
| 2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified |