
Multi-Armed Bandits

Multi-armed bandits refer to a class of problems in which a fixed, limited amount of resources must be allocated among competing choices in a way that maximizes expected gain, even though each choice's properties are only partially known at the time of allocation. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
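As an illustration of the exploration/exploitation trade-off described above, the following is a minimal epsilon-greedy sketch on a simulated Bernoulli bandit. It is a generic example rather than the method behind any paper or benchmark entry listed on this page; the arm means, epsilon, and horizon are arbitrary illustrative values.

```python
import random

def epsilon_greedy_bandit(arm_means, n_rounds=10_000, epsilon=0.1, seed=0):
    """Epsilon-greedy agent on a simulated Bernoulli multi-armed bandit."""
    rng = random.Random(seed)
    n_arms = len(arm_means)
    counts = [0] * n_arms          # pulls per arm
    estimates = [0.0] * n_arms     # empirical mean reward per arm
    total_reward = 0.0

    for _ in range(n_rounds):
        # Explore a random arm with probability epsilon, otherwise exploit
        # the arm with the highest empirical mean so far.
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])

        # Draw a Bernoulli reward from the chosen arm's hidden success rate.
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        total_reward += reward

        # Incremental update of the chosen arm's empirical mean.
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    return total_reward

if __name__ == "__main__":
    # Three arms with hidden success probabilities 0.2, 0.5 and 0.7.
    print(epsilon_greedy_bandit([0.2, 0.5, 0.7]))
```

Lowering epsilon shifts the balance toward exploiting the current best estimate; raising it spends more pulls on exploration.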

Papers

Showing 1001–1025 of 1262 papers

Title | Status | Hype
PAC Reinforcement Learning with Rich Observations |  | 0
Pairwise Elimination with Instance-Dependent Guarantees for Bandits with Cost Subsidy |  | 0
Parallel Contextual Bandits in Wireless Handover Optimization |  | 0
Parallelizing Contextual Bandits |  | 0
Parameterized Exploration |  | 0
Partial Bandit and Semi-Bandit: Making the Most Out of Scarce Users' Feedback |  | 0
Partially Observable Contextual Bandits with Linear Payoffs |  | 0
Personalization Paradox in Behavior Change Apps: Lessons from a Social Comparison-Based Personalized App for Physical Activity |  | 0
Personalized Course Sequence Recommendations |  | 0
Perturbed-History Exploration in Stochastic Multi-Armed Bandits |  | 0
Pessimism for Offline Linear Contextual Bandits using ℓp Confidence Sets |  | 0
PG-TS: Improved Thompson Sampling for Logistic Contextual Bandits |  | 0
Phasic Diversity Optimization for Population-Based Reinforcement Learning |  | 0
Non-Stationary Off-Policy Optimization |  | 0
Player Modeling via Multi-Armed Bandits |  | 0
Policy Gradients for Contextual Recommendations |  | 0
Practical Algorithms for Best-K Identification in Multi-Armed Bandits |  | 0
Practical Contextual Bandits with Regression Oracles |  | 0
Preference-based Online Learning with Dueling Bandits: A Survey |  | 0
Preference-centric Bandits: Optimality of Mixtures and Regret-efficient Algorithms |  | 0
Privacy Amplification via Shuffling for Linear Contextual Bandits |  | 0
Privacy-Preserving Communication-Efficient Federated Multi-Armed Bandits |  | 0
Privacy-Preserving Multi-Party Contextual Bandits |  | 0
Problem Dependent Reinforcement Learning Bounds Which Can Identify Bandit Structure in MDPs |  | 0
Productization Challenges of Contextual Multi-Armed Bandits |  | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 |  | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 |  | Unverified
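The metric reported above is cumulative regret. The page does not state the exact evaluation protocol (horizon, dataset, reward model), so the sketch below only assumes the standard definition: the sum, over rounds, of the gap between the best arm's expected reward and the expected reward of the arm actually pulled. The numbers in the example are made up for illustration.

```python
def cumulative_regret(optimal_mean, chosen_means):
    """Cumulative (pseudo-)regret: for every round, the gap between the best
    arm's expected reward and the expected reward of the arm actually pulled."""
    return sum(optimal_mean - m for m in chosen_means)

# Toy example: the best arm pays 0.7 on average; the agent pulled arms whose
# hidden means were 0.2, 0.5, 0.7, 0.7 and 0.5 over five rounds.
print(cumulative_regret(0.7, [0.2, 0.5, 0.7, 0.7, 0.5]))  # ~0.9
```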