
Multi-Armed Bandits

Multi-armed bandits refer to a class of problems in which a fixed amount of resources must be allocated among competing alternatives so as to maximize expected gain, when each alternative's payoff is only partially known. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
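
The exploration/exploitation trade-off mentioned above can be made concrete with a short sketch. Below is a minimal epsilon-greedy loop for a Bernoulli bandit; the arm means, epsilon, and round count are illustrative values, not parameters taken from any paper listed on this page.

```python
import random

# Minimal epsilon-greedy sketch for a Bernoulli multi-armed bandit.
# `true_means`, `epsilon`, and `n_rounds` are illustrative, not benchmark settings.

def epsilon_greedy(true_means, n_rounds=10_000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms      # number of pulls per arm
    values = [0.0] * n_arms    # running mean reward per arm
    total_reward = 0.0

    for _ in range(n_rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                       # explore: random arm
        else:
            arm = max(range(n_arms), key=lambda a: values[a]) # exploit: best estimate
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean update
        total_reward += reward
    return total_reward, values

if __name__ == "__main__":
    reward, estimates = epsilon_greedy([0.2, 0.5, 0.7])
    print(reward, estimates)
```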

Papers

Showing 451–475 of 1262 papers

Title | Status | Hype
Networked Restless Bandits with Positive Externalities | Code | 0
Stochastic Rising Bandits | Code | 0
AC-Band: A Combinatorial Bandit-Based Approach to Algorithm Configuration | Code | 0
On Regret-optimal Cooperative Nonstochastic Multi-armed Bandits | | 0
Incorporating Multi-armed Bandit with Local Search for MaxSAT | Code | 0
Constrained Pure Exploration Multi-Armed Bandits with a Fixed Budget | | 0
Contextual Decision-Making with Knapsacks Beyond the Worst Case | | 0
Contextual Bandits in a Survey Experiment on Charitable Giving: Within-Experiment Outcomes versus Policy Learning | | 0
Transfer Learning for Contextual Multi-armed Bandits | | 0
Causal Bandits: Online Decision-Making in Endogenous Settings | | 0
Bandit Algorithms for Prophet Inequality and Pandora's Box | | 0
Latent Bottlenecked Attentive Neural Processes | Code | 0
Multi-Player Bandits Robust to Adversarial Collisions | | 0
On Penalization in Stochastic Multi-armed Bandits | | 0
Contextual Bandits with Packing and Covering Constraints: A Modular Lagrangian Approach via Regression | | 0
Hypothesis Transfer in Bandits by Weighted Models | | 0
Generalizing distribution of partial rewards for multi-armed bandits with temporally-partitioned rewards | | 0
Thompson Sampling for High-Dimensional Sparse Linear Contextual Bandits | Code | 0
Safe and Adaptive Decision-Making for Optimization of Safety-Critical Systems: The ARTEO Algorithm | Code | 0
Contexts can be Cheap: Solving Stochastic Contextual Bandits with Linear Bandit Algorithms | | 0
Adaptive Data Depth via Multi-Armed Bandits | Code | 0
Indexability is Not Enough for Whittle: Improved, Near-Optimal Algorithms for Restless Bandits | Code | 1
Revisiting Simple Regret: Fast Rates for Returning a Good Arm | | 0
Robust Contextual Linear Bandits | | 0
Conditionally Risk-Averse Contextual Bandits | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified
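
The metric reported above is cumulative regret: the sum over rounds of the gap between the best arm's expected reward and the expected reward of the arm the policy actually pulled. The sketch below is only a rough illustration of that definition, not the benchmark's evaluation code; `true_means` and the pulled-arm sequence are made-up examples.

```python
# Minimal sketch of the cumulative-regret metric, assuming a stochastic bandit
# whose true arm means are known to the evaluator. All values are illustrative.

def cumulative_regret(true_means, pulled_arms):
    """Sum over rounds of (best mean reward - mean reward of the chosen arm)."""
    best = max(true_means)
    return sum(best - true_means[arm] for arm in pulled_arms)

if __name__ == "__main__":
    # Three arms; the policy pulled arm 0 twice and the optimal arm 2 three times.
    print(cumulative_regret([0.2, 0.5, 0.7], [0, 0, 2, 2, 2]))  # prints 1.0
```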