SOTAVerified

Multi-Armed Bandits

Multi-armed bandits refer to a class of problems in which a fixed amount of resources must be allocated among competing choices in a way that maximizes expected gain, when each choice's payoff is only partially known at allocation time. These problems typically involve an exploration/exploitation trade-off.
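As a concrete illustration of that trade-off, below is a minimal Python sketch of the classic UCB1 strategy (one of the upper-confidence-bound algorithms referenced in the paper list). It assumes Bernoulli reward arms; the `pull` callback, the arm probabilities, and the horizon are hypothetical stand-ins, not taken from any paper on this page.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Illustrative UCB1 sketch: pull(arm) returns a reward in [0, 1]."""
    counts = [0] * n_arms    # times each arm has been played
    values = [0.0] * n_arms  # running mean reward per arm
    # Play each arm once to initialize its estimate.
    for arm in range(n_arms):
        counts[arm] = 1
        values[arm] = pull(arm)
    for t in range(n_arms, horizon):
        # Pick the arm with the highest upper confidence bound:
        # mean + sqrt(2 ln t / n_arm). The bonus term shrinks as an arm
        # is played more, trading exploration off against exploitation.
        arm = max(range(n_arms),
                  key=lambda a: values[a] + math.sqrt(2 * math.log(t) / counts[a]))
        r = pull(arm)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]  # incremental mean
    return values, counts

# Hypothetical example: three Bernoulli arms with hidden success probabilities.
probs = [0.2, 0.5, 0.7]
values, counts = ucb1(lambda a: 1.0 if random.random() < probs[a] else 0.0,
                      n_arms=3, horizon=10_000)
print(counts)  # the 0.7 arm should dominate the play counts
```

Thompson Sampling, also prominent in the list above, addresses the same trade-off by sampling from a posterior over each arm's mean rather than adding a confidence bonus.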

(Image credit: Microsoft Research)

Papers

Showing 991–1000 of 1262 papers

Title | Status | Hype
TS-UCB: Improving on Thompson Sampling With Little to No Additional Computation |  | 0
UCB algorithms for multi-armed bandits: Precise regret and adaptive inference |  | 0
Understanding Memory-Regret Trade-Off for Streaming Stochastic Multi-Armed Bandits |  | 0
Understanding the stochastic dynamics of sequential decision-making processes: A path-integral analysis of multi-armed bandits |  | 0
Unifying Clustered and Non-stationary Bandits |  | 0
uniINF: Best-of-Both-Worlds Algorithm for Parameter-Free Heavy-Tailed MABs |  | 0
Unimodal Bandits: Regret Lower Bounds and Optimal Algorithms |  | 0
Universal and data-adaptive algorithms for model selection in linear contextual bandits |  | 0
Unreliable Multi-Armed Bandits: A Novel Approach to Recommendation Systems |  | 0
Upper-Confidence-Bound Algorithms for Active Learning in Multi-Armed Bandits |  | 0
Page 100 of 127

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 |  | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 |  | Unverified
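For reference, the metric in this table is presumably the standard cumulative regret of stochastic bandits; a sketch of that definition, assuming the usual setup (this page does not spell it out):

```latex
% Cumulative regret after T rounds: the expected gap between always
% playing the best arm and the rewards the algorithm actually collected.
% \mu^* is the mean reward of the optimal arm; a_t is the arm played at round t.
R_T = T\,\mu^* - \mathbb{E}\!\left[\sum_{t=1}^{T} \mu_{a_t}\right]
```

Under this reading, lower values indicate an algorithm that identifies and exploits the best arm more quickly.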