SOTAVerified

Multi-Armed Bandits

Multi-armed bandits refer to a class of problems in which a fixed amount of resources must be allocated among competing choices (arms) so as to maximize expected gain. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
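To make the trade-off concrete, here is a minimal sketch of an epsilon-greedy strategy on a stochastic bandit. The arm count, Bernoulli reward probabilities, and epsilon value are illustrative assumptions, not taken from any paper listed below:

```python
import random

def pull(arm_probs, arm):
    """Simulate one pull: Bernoulli reward with the arm's success probability."""
    return 1.0 if random.random() < arm_probs[arm] else 0.0

def epsilon_greedy(arm_probs, steps=10_000, epsilon=0.1):
    """Epsilon-greedy bandit: explore a random arm with probability epsilon,
    otherwise exploit the arm with the highest estimated mean reward."""
    n_arms = len(arm_probs)
    counts = [0] * n_arms    # number of pulls per arm
    values = [0.0] * n_arms  # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)  # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = pull(arm_probs, arm)
        counts[arm] += 1
        # incremental mean update avoids storing reward history
        values[arm] += (reward - values[arm]) / counts[arm]
        total_reward += reward
    return values, total_reward

if __name__ == "__main__":
    random.seed(0)
    estimates, total = epsilon_greedy([0.2, 0.5, 0.7])
    print("estimated arm values:", [round(v, 3) for v in estimates])
    print("total reward:", total)
```

With a small epsilon the agent spends most pulls on the empirically best arm while still sampling the others often enough to correct bad early estimates.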

Papers

Showing 1251–1262 of 1262 papers

Title | Status | Hype
Contextual Bandits with Cross-learning | - | 0
Contextual Bandits with Knapsacks for a Conversion Model | - | 0
Contextual Bandits with Latent Confounders: An NMF Approach | - | 0
Contextual Bandits with Non-Stationary Correlated Rewards for User Association in MmWave Vehicular Networks | - | 0
Contextual Bandits with Online Neural Regression | - | 0
Contextual Bandits with Random Projection | - | 0
Contextual Bandits with Side-Observations | - | 0
Contextual Bandits with Similarity Information | - | 0
Contextual Bandits with Sparse Data in Web setting | - | 0
Contextual Bandits with Stage-wise Constraints | - | 0
Contextual bandits with surrogate losses: Margin bounds and efficient algorithms | - | 0
Contextual Bandit with Herding Effects: Algorithms and Recommendation Applications | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | - | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | - | Unverified
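For reference, cumulative regret (the metric reported above) is the gap between the reward of always playing the best arm and the reward actually collected. A minimal sketch, assuming Bernoulli arms as in the example earlier; the arm probabilities and pull sequence are illustrative:

```python
def cumulative_regret(arm_probs, chosen_arms):
    """Expected cumulative regret: for each pull, the best arm's mean reward
    minus the chosen arm's mean reward, summed over all pulls."""
    best = max(arm_probs)
    return sum(best - arm_probs[a] for a in chosen_arms)

# Always pulling the worst arm of [0.2, 0.5, 0.7] for 100 steps
# incurs 0.5 expected regret per step:
print(cumulative_regret([0.2, 0.5, 0.7], [0] * 100))  # 50.0
```

A good bandit algorithm keeps this sum growing sublinearly in the number of pulls, i.e. its per-step regret shrinks toward zero as it identifies the best arm.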