
Multi-Armed Bandits

The multi-armed bandit problem is the task of allocating a fixed, limited set of resources among competing choices so as to maximize expected gain, when each choice's payoff is only partially known at the time of allocation. These problems typically involve an exploration/exploitation trade-off: trying arms to learn their payoffs versus repeatedly playing the arm that currently looks best.

(Image credit: Microsoft Research)
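
To make the exploration/exploitation trade-off concrete, below is a minimal sketch of an epsilon-greedy agent on a Bernoulli bandit. All names here (`eps_greedy`, `true_means`, the 0.2/0.5/0.7 arm means) are illustrative assumptions, not taken from any paper listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

def eps_greedy(true_means, n_steps=10_000, eps=0.1):
    """Epsilon-greedy on a Bernoulli bandit: explore a random arm with
    probability eps, otherwise exploit the arm with the highest
    empirical mean reward so far. (Illustrative sketch.)"""
    n_arms = len(true_means)
    counts = np.zeros(n_arms)   # number of pulls per arm
    values = np.zeros(n_arms)   # running empirical mean reward per arm
    rewards = np.zeros(n_steps)
    for t in range(n_steps):
        if rng.random() < eps:              # explore
            arm = int(rng.integers(n_arms))
        else:                               # exploit current best estimate
            arm = int(np.argmax(values))
        reward = float(rng.random() < true_means[arm])  # Bernoulli draw
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        rewards[t] = reward
    return rewards

rewards = eps_greedy([0.2, 0.5, 0.7])
print("average reward:", rewards.mean())
```

With `eps = 0.1` the agent spends roughly 10% of its pulls exploring uniformly at random and otherwise exploits its current best estimate; raising `eps` trades short-term reward for faster learning about the arms.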

Papers

Showing 761–770 of 1,262 papers

| Title | Status | Hype |
| --- | --- | --- |
| Parameterized Exploration | | 0 |
| Partial Bandit and Semi-Bandit: Making the Most Out of Scarce Users' Feedback | | 0 |
| Partially Observable Contextual Bandits with Linear Payoffs | | 0 |
| Personalization Paradox in Behavior Change Apps: Lessons from a Social Comparison-Based Personalized App for Physical Activity | | 0 |
| Personalized Course Sequence Recommendations | | 0 |
| Perturbed-History Exploration in Stochastic Multi-Armed Bandits | | 0 |
| Pessimism for Offline Linear Contextual Bandits using ℓp Confidence Sets | | 0 |
| PG-TS: Improved Thompson Sampling for Logistic Contextual Bandits | | 0 |
| Phasic Diversity Optimization for Population-Based Reinforcement Learning | | 0 |
| Non-Stationary Off-Policy Optimization | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified |
| 2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified |
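
Cumulative regret, the metric reported above, is the gap between the expected reward of always playing the best arm and the expected reward of the arms actually played. A hedged sketch of how this quantity is typically computed (all names and example values illustrative):

```python
import numpy as np

def cumulative_regret(true_means, arms_played):
    """Cumulative pseudo-regret after each step: the running sum of
    mu* - mu[arm_t], where mu* is the expected reward of the best arm."""
    true_means = np.asarray(true_means, dtype=float)
    gaps = true_means.max() - true_means[np.asarray(arms_played)]
    return np.cumsum(gaps)

# Example: an agent that explores arms 0 and 1 early, then settles on arm 2.
print(cumulative_regret([0.2, 0.5, 0.7], [0, 1, 2, 1, 2, 2, 2])[-1])
```

Lower is better; an algorithm whose cumulative regret grows sublinearly in the number of steps is, in effect, converging on the best arm.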