
Multi-Armed Bandits

Multi-armed bandits refer to the problem of allocating a fixed, limited amount of resources among competing choices (arms) so as to maximize expected gain, when each choice's reward properties are only partially known at the time of allocation. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
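As a concrete illustration of the exploration/exploitation trade-off, below is a minimal sketch of an epsilon-greedy policy on Bernoulli arms. The arm means, number of rounds, and epsilon value are illustrative assumptions, not taken from any paper listed on this page.

```python
import random

def epsilon_greedy_bandit(true_means, n_rounds=10_000, epsilon=0.1, seed=0):
    """Minimal epsilon-greedy bandit: with probability epsilon pull a random
    arm (explore), otherwise pull the best arm seen so far (exploit)."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms      # number of pulls per arm
    values = [0.0] * n_arms    # running mean reward per arm
    total_reward = 0.0

    for _ in range(n_rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                         # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])   # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0 # Bernoulli reward
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]     # incremental mean
        total_reward += reward
    return total_reward, values

# Example run with three arms whose success probabilities are unknown to the agent.
reward, estimates = epsilon_greedy_bandit([0.2, 0.5, 0.7])
print(reward, estimates)
```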

Papers

Showing 401–410 of 1,262 papers

Title | Status | Hype
----- | ------ | ----
Federated Linear Contextual Bandits with User-level Differential Privacy | | 0
Tight Regret Bounds for Single-pass Streaming Multi-armed Bandits | Code | 0
Differentially Private Episodic Reinforcement Learning with Heavy-tailed Rewards | | 0
Representation-Driven Reinforcement Learning | | 0
Collaborative Multi-Agent Heterogeneous Multi-Armed Bandits | | 0
Contextual Bandits with Budgeted Information Reveal | | 0
Small Total-Cost Constraints in Contextual Bandits with Knapsacks, with Application to Fairness | | 0
Meta-in-context learning in large language models | Code | 0
Sequential Best-Arm Identification with Application to Brain-Computer Interface | | 0
Efficient Training of Multi-task Combinarotial Neural Solver with Multi-armed Bandits | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
- | ----- | ------ | ------- | -------- | ------
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified
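For context on the metric above: cumulative regret is typically the gap, summed over rounds, between the expected reward of an oracle that always pulls the best arm and the expected reward of the arm actually pulled. The sketch below illustrates that definition only; the arm means and pull sequence are assumed for illustration and are not the benchmark's evaluation protocol.

```python
def cumulative_regret(true_means, pulls):
    """Cumulative regret = sum over rounds of
    (best arm's mean reward - mean reward of the arm actually pulled)."""
    best = max(true_means)
    return sum(best - true_means[arm] for arm in pulls)

# Illustration with assumed arm means and an arbitrary 5-round pull sequence.
means = [0.2, 0.5, 0.7]
pulls = [2, 0, 2, 1, 2]                  # arms chosen over 5 rounds
print(cumulative_regret(means, pulls))   # 0.0 + 0.5 + 0.0 + 0.2 + 0.0 ≈ 0.7
```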