SOTAVerified

Multi-Armed Bandits

Multi-armed bandits refer to the problem of allocating a fixed, limited set of resources among competing choices in a way that maximizes expected gain, when each choice's reward properties are only partially known at the time of allocation. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
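To make the exploration/exploitation trade-off concrete, below is a minimal sketch of epsilon-greedy, one of the simplest bandit strategies, run on toy Bernoulli arms. The `pull_arm` callback, the arm probabilities, and all parameter values are illustrative assumptions, not taken from any paper listed on this page.

```python
import random

def epsilon_greedy(pull_arm, n_arms, n_rounds, epsilon=0.1):
    """With probability epsilon pull a random arm (explore);
    otherwise pull the arm with the best observed mean (exploit)."""
    counts = [0] * n_arms    # times each arm was pulled
    means = [0.0] * n_arms   # running mean reward per arm
    total = 0.0
    for _ in range(n_rounds):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)                   # explore
        else:
            arm = max(range(n_arms), key=lambda a: means[a]) # exploit
        reward = pull_arm(arm)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]    # incremental mean
        total += reward
    return total, means

# Toy usage: three Bernoulli arms with unknown success probabilities.
arm_probs = [0.2, 0.5, 0.7]
total, means = epsilon_greedy(
    lambda a: 1.0 if random.random() < arm_probs[a] else 0.0,
    n_arms=3, n_rounds=10_000)
```

A fixed epsilon keeps exploring forever; schedules that decay epsilon over time, or confidence-based rules such as UCB, trade off exploration more efficiently.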

Papers

Showing 226–250 of 1262 papers

COBRA: Contextual Bandit Algorithm for Ensuring Truthful Strategic Agents
Parallel Best Arm Identification in Heterogeneous Environments
Collaborative Learning with Limited Interaction: Tight Bounds for Distributed Exploration in Multi-Armed Bandits
Collaborative Min-Max Regret in Grouped Multi-Armed Bandits
Collaborative Multi-Agent Heterogeneous Multi-Armed Bandits
Communication-Efficient Collaborative Regret Minimization in Multi-Armed Bandits
Adversarial Attacks on Adversarial Bandits
Top-k Combinatorial Bandits with Full-Bandit Feedback
Bayesian Analysis of Combinatorial Gaussian Process Bandits
Combinatorial Multi-armed Bandits: Arm Selection via Group Testing
A Regret bound for Non-stationary Multi-Armed Bandits with Fairness Constraints
Combinatorial Multi-armed Bandits for Real-Time Strategy Games
Combinatorial Multi-Armed Bandits with Filtered Feedback
Combinatorial Multivariant Multi-Armed Bandits with Applications to Episodic Reinforcement Learning and Beyond
Combinatorial Network Optimization with Unknown Variables: Multi-Armed Bandits with Linear Rewards
Combinatorial Pure Exploration of Multi-Armed Bandits
Combinatorial Pure Exploration with Full-bandit Feedback and Beyond: Solving Combinatorial Optimization under Uncertainty with Limited Observation
Combinatorial Semi-Bandits with Knapsacks
Combining Difficulty Ranking with Multi-Armed Bandits to Sequence Educational Content
A Survey of Learning in Multiagent Environments: Dealing with Non-Stationarity
Combining Online Learning and Offline Learning for Contextual Bandits with Deficient Support
Adversarial Bandits with Knapsacks
Communication Efficient Distributed Learning for Kernelized Contextual Bandits
Comparative Performance of Collaborative Bandit Algorithms: Effect of Sparsity and Exploration Intensity
A framework for optimizing COVID-19 testing policy using a Multi Armed Bandit approach

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | n/a | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | n/a | Unverified
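For orientation: model names like "Linear FullPosterior" conventionally denote Thompson sampling with an exact Gaussian posterior over a linear reward model, and "NeuralLinear" applies the same posterior on top of learned neural-network features. The sketch below is a minimal illustration of the linear case under standard ridge-prior assumptions; it is not the benchmarked agents' code, and `get_contexts`, `pull_arm`, and the hyperparameters `lam` and `sigma2` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_thompson_sampling(get_contexts, pull_arm, d, n_rounds,
                             lam=1.0, sigma2=0.25):
    """Thompson sampling with a full Gaussian posterior over a linear
    reward model r = x @ theta + noise. A ridge prior and Gaussian
    noise give the closed-form posterior N(A^{-1} b, sigma2 * A^{-1})."""
    A = lam * np.eye(d)   # posterior precision
    b = np.zeros(d)       # accumulated reward-weighted features
    for _ in range(n_rounds):
        contexts = get_contexts()            # (n_arms, d) features per arm
        mean = np.linalg.solve(A, b)         # posterior mean
        cov = sigma2 * np.linalg.inv(A)      # posterior covariance
        theta = rng.multivariate_normal(mean, cov)  # sample a model
        arm = int(np.argmax(contexts @ theta))      # greedy under the sample
        x = contexts[arm]
        r = pull_arm(arm)
        A += np.outer(x, x)                  # Bayesian update
        b += r * x
    return A, b
```

Sampling theta from the posterior and acting greedily on the sample makes exploration self-tuning: an arm keeps getting tried exactly as long as the posterior leaves it a plausible winner.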