Multi-Armed Bandits

Multi-armed bandits refer to the task of allocating a fixed, limited amount of resources among competing choices (arms) so as to maximize expected gain, when the properties of each choice are only partially known at the time of allocation. These problems typically involve an exploration/exploitation trade-off.
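
As a minimal sketch of this trade-off, the epsilon-greedy strategy below explores a random arm with small probability and otherwise exploits the arm with the best reward estimate. The arm reward means and parameter values are illustrative assumptions, not drawn from any listed paper or benchmark.

```python
# Minimal epsilon-greedy sketch for a stochastic multi-armed bandit.
# Arm means and epsilon are illustrative; rewards are sampled with Gaussian noise.
import random

def epsilon_greedy(true_means, n_rounds=10_000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms          # number of pulls per arm
    estimates = [0.0] * n_arms     # running mean reward per arm
    total_reward = 0.0
    for _ in range(n_rounds):
        if rng.random() < epsilon:                 # explore: pick a random arm
            arm = rng.randrange(n_arms)
        else:                                      # exploit: pick the best estimate so far
            arm = max(range(n_arms), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[arm], 1.0)   # noisy reward from the chosen arm
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return estimates, total_reward

if __name__ == "__main__":
    est, total = epsilon_greedy([0.1, 0.5, 0.9])
    print("estimated arm means:", [round(e, 2) for e in est])
```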

(Image credit: Microsoft Research)

Papers

Showing 521–530 of 1262 papers

| Title | Status | Hype |
|---|---|---|
| Risk-Averse Multi-Armed Bandits with Unobserved Confounders: A Case Study in Emotion Regulation in Mobile Health | | 0 |
| When Privacy Meets Partial Information: A Refined Analysis of Differentially Private Bandits | | 0 |
| Multi-Armed Bandits with Self-Information Rewards | | 0 |
| Exposure-Aware Recommendation using Contextual Bandits | | 0 |
| Variational Inference for Model-Free and Model-Based Reinforcement Learning | | 0 |
| Dynamic Global Sensitivity for Differentially Private Contextual Bandits | | 0 |
| A Provably Efficient Model-Free Posterior Sampling Method for Episodic Reinforcement Learning | | 0 |
| Understanding the stochastic dynamics of sequential decision-making processes: A path-integral analysis of multi-armed bandits | | 0 |
| Increasing Students' Engagement to Reminder Emails Through Multi-Armed Bandits | | 0 |
| Nonstationary Continuum-Armed Bandit Strategies for Automated Trading in a Simulated Financial Market | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified |
| 2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified |
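
For context, cumulative regret (the metric reported above) is the gap between the reward of always playing the best arm and the reward actually collected over all rounds. A minimal sketch of the computation, assuming the true arm means are known (which is only possible in simulation); the arm means and pull sequence below are illustrative:

```python
# Cumulative regret sketch: sums the per-round gap between the optimal arm's
# mean reward and the mean of the arm actually pulled. Values are illustrative.
def cumulative_regret(true_means, arms_pulled):
    best = max(true_means)
    return sum(best - true_means[a] for a in arms_pulled)

print(cumulative_regret([0.1, 0.5, 0.9], [0, 2, 2, 1, 2]))  # -> 1.2
```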