SOTAVerified

Multi-Armed Bandits

The multi-armed bandit problem is a task in which a fixed, limited set of resources must be allocated among competing choices so as to maximize expected gain. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
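As a minimal illustration of the exploration/exploitation trade-off (not tied to any specific paper listed below), the sketch here runs Thompson sampling on Bernoulli-reward arms: each arm keeps a Beta posterior, a mean is sampled from each posterior, and the arm with the highest sample is played. The arm probabilities, horizon, and seed are hypothetical.

```python
import random

def thompson_sampling(arm_probs, horizon=10_000, seed=0):
    """Play Bernoulli arms for `horizon` rounds via Thompson sampling.

    `arm_probs` are the true (unknown to the learner) success probabilities;
    they are only used to simulate rewards.
    """
    rng = random.Random(seed)
    k = len(arm_probs)
    successes = [1] * k  # Beta(1, 1) uniform prior for each arm
    failures = [1] * k
    total_reward = 0
    for _ in range(horizon):
        # Explore/exploit implicitly: sample a plausible mean for each arm
        # from its Beta posterior, then play the arm with the highest sample.
        samples = [rng.betavariate(successes[i], failures[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < arm_probs[arm] else 0
        total_reward += reward
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return total_reward

if __name__ == "__main__":
    # Three hypothetical arms; the learner never sees these probabilities.
    print(thompson_sampling([0.10, 0.50, 0.55]))
```

Early on the posteriors are wide, so suboptimal arms are still sampled (exploration); as evidence accumulates the posteriors concentrate and the best arm is played almost exclusively (exploitation).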

Papers

Showing 591–600 of 1262 papers

Title | Status | Hype
Thompson Sampling for Bandit Learning in Matching Markets | Code | 0
Worst-case Performance of Greedy Policies in Bandits with Imperfect Context Observations | — | 0
Stochastic Multi-armed Bandits with Non-stationary Rewards Generated by a Linear Dynamical System | — | 0
Strategies for Safe Multi-Armed Bandits with Logarithmic Regret and Risk | — | 0
Flexible and Efficient Contextual Bandits with Heterogeneous Treatment Effect Oracles | — | 0
Best Arm Identification in Restless Markov Multi-Armed Bandits | — | 0
On Kernelized Multi-Armed Bandits with Constraints | — | 0
Modeling Attrition in Recommender Systems with Departing Bandits | — | 0
Multi-armed bandits for resource efficient, online optimization of language model pre-training: the use case of dynamic masking | Code | 0
Efficient Algorithms for Extreme Bandits | Code | 0
Page 60 of 127

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | — | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | — | Unverified
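For reference, and assuming the leaderboard uses the standard definition, the cumulative regret reported above measures how much expected reward is lost relative to always playing the best arm (lower is better):

```latex
R_T \;=\; \sum_{t=1}^{T} \bigl(\mu^{*} - \mu_{a_t}\bigr),
\qquad \mu^{*} = \max_{i} \mu_i
```

where \mu_i is the expected reward of arm i and a_t is the arm pulled at round t.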