SOTAVerified

Multi-Armed Bandits

Multi-armed bandits are a class of problems in which a fixed, limited amount of resources must be allocated among competing choices (arms) so as to maximize expected gain. These problems typically involve an exploration/exploitation trade-off: the learner must balance trying arms whose payoffs are still uncertain against repeatedly playing the arm that currently looks best.

(Image credit: Microsoft Research)
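To make the exploration/exploitation trade-off concrete, here is a minimal epsilon-greedy sketch on a toy Bernoulli bandit. All names and values (arm probabilities, epsilon, horizon) are illustrative assumptions and do not come from any paper listed on this page.

```python
# Minimal epsilon-greedy sketch for a Bernoulli multi-armed bandit.
# Arm probabilities, epsilon, and horizon are illustrative assumptions.
import random

def run_epsilon_greedy(arm_probs, epsilon=0.1, horizon=10_000, seed=0):
    """Play `horizon` rounds; explore with probability epsilon, otherwise exploit."""
    rng = random.Random(seed)
    n_arms = len(arm_probs)
    counts = [0] * n_arms      # number of pulls per arm
    values = [0.0] * n_arms    # running mean reward per arm
    total_reward = 0.0

    for _ in range(horizon):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                        # explore: random arm
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit: best estimate
        reward = 1.0 if rng.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]    # incremental mean update
        total_reward += reward

    return total_reward, counts

if __name__ == "__main__":
    reward, pulls = run_epsilon_greedy([0.2, 0.5, 0.75])
    print(f"total reward: {reward:.0f}, pulls per arm: {pulls}")
```

With a small epsilon the agent concentrates its pulls on the best arm while still sampling the others occasionally; many of the papers listed below study more refined strategies (Thompson sampling, UCB, contextual variants) for managing this same trade-off.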

Papers

Showing 131–140 of 1,262 papers

Title | Status | Hype
A Hybrid Meta-Learning and Multi-Armed Bandit Approach for Context-Specific Multi-Objective Recommendation Optimization | | 0
Adaptive Data Augmentation for Thompson Sampling | | 0
A Survey on Practical Applications of Multi-Armed and Contextual Bandits | | 0
A Hierarchical Nearest Neighbour Approach to Contextual Bandits | | 0
A General Theory of the Stochastic Linear Bandit and Its Applications | | 0
Adaptive Contract Design for Crowdsourcing Markets: Bandit Algorithms for Repeated Principal-Agent Problems | | 0
A General Framework for Off-Policy Learning with Partially-Observed Reward | | 0
A General Framework for Bandit Problems Beyond Cumulative Objectives | | 0
Adaptive Budgeted Multi-Armed Bandits for IoT with Dynamic Resource Constraints | | 0
A Contextual Combinatorial Semi-Bandit Approach to Network Bottleneck Identification | | 0
Page 14 of 127

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified
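The metric in this table, cumulative regret, measures the gap between the reward of always playing the best arm and the reward the algorithm actually collected. The sketch below shows one common way to compute the pseudo-regret of a Bernoulli bandit run; the arm probabilities and chosen-arm sequence are made-up assumptions, not the evaluation protocol behind the results above.

```python
# Illustrative sketch of cumulative (pseudo-)regret for a Bernoulli bandit run.
# The arm probabilities and the chosen-arm sequence below are assumptions.
def cumulative_regret(arm_probs, chosen_arms):
    """Sum over rounds of (best arm's mean reward - played arm's mean reward)."""
    best = max(arm_probs)
    return sum(best - arm_probs[a] for a in chosen_arms)

# Example: best arm has mean 0.75; regret accrues only when a worse arm is pulled.
print(cumulative_regret([0.2, 0.5, 0.75], [0, 2, 2, 1, 2]))  # -> 0.8 (0.55 + 0.25)
```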