
Multi-Armed Bandits

Multi-armed bandits refer to problems in which a fixed amount of resources must be allocated among competing choices (arms) so as to maximize expected gain, when each arm's payoff is only partially known at the time of allocation. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
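Below is a minimal sketch of the exploration/exploitation trade-off using an epsilon-greedy strategy on Bernoulli arms. The arm means, epsilon, and horizon are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

# Epsilon-greedy bandit sketch: arm means and epsilon are illustrative assumptions.
rng = np.random.default_rng(0)
true_means = [0.2, 0.5, 0.7]   # hypothetical Bernoulli reward probabilities per arm
epsilon = 0.1                  # probability of exploring a random arm
counts = np.zeros(len(true_means))
values = np.zeros(len(true_means))

for t in range(1000):
    # Explore with probability epsilon, otherwise exploit the current best estimate.
    if rng.random() < epsilon:
        arm = int(rng.integers(len(true_means)))
    else:
        arm = int(np.argmax(values))
    reward = rng.binomial(1, true_means[arm])
    # Incrementally update the empirical mean of the chosen arm.
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

print("estimated arm values:", values)
```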

Papers

Showing 641–650 of 1262 papers

| Title | Status | Hype |
| --- | --- | --- |
| Decentralized Cooperative Reinforcement Learning with Hierarchical Information Structure | | 0 |
| (Almost) Free Incentivized Exploration from Decentralized Learning Agents | Code | 0 |
| Heterogeneous Multi-player Multi-armed Bandits: Closing the Gap and Generalization | Code | 0 |
| Federated Linear Contextual Bandits | | 0 |
| The Pareto Frontier of model selection for general Contextual Bandits | | 0 |
| Linear Contextual Bandits with Adversarial Corruptions | | 0 |
| Analysis of Thompson Sampling for Partially Observable Contextual Multi-Armed Bandits | | 0 |
| Towards the D-Optimal Online Experiment Design for Recommender Selection | Code | 0 |
| Dynamic pricing and assortment under a contextual MNL demand | | 0 |
| Stateful Offline Contextual Policy Evaluation and Learning | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified |
| 2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified |
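The metric reported above, cumulative regret, is the gap between the reward of the best fixed arm and the reward actually collected, summed over rounds. A minimal sketch of that computation, assuming known arm means and a hypothetical recorded sequence of pulls:

```python
import numpy as np

# Cumulative regret sketch: true_means and the pull sequence are illustrative assumptions.
true_means = np.array([0.2, 0.5, 0.7])
pulls = np.array([0, 2, 1, 2, 2])               # hypothetical arms chosen at each round
regret_per_round = true_means.max() - true_means[pulls]
cumulative_regret = regret_per_round.cumsum()   # running total of expected regret
print(cumulative_regret[-1])
```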