
Multi-Armed Bandits

Multi-armed bandits refer to the problem of allocating a fixed, limited set of resources among competing alternatives so as to maximize expected gain, when each alternative's payoff is only partially known at the time of allocation. These problems typically involve an exploration/exploitation trade-off: gathering more information about uncertain arms versus playing the arm that currently looks best.

(Image credit: Microsoft Research)
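As a concrete illustration of the exploration/exploitation trade-off, here is a minimal UCB1 sketch in Python. It is not tied to any paper listed below; the `pull` callback, the Bernoulli arm probabilities, and the horizon are illustrative assumptions.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Minimal UCB1 sketch. `pull(arm)` is assumed to return a reward in [0, 1]."""
    counts = [0] * n_arms   # times each arm has been pulled
    sums = [0.0] * n_arms   # cumulative reward per arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1     # play each arm once to initialize its estimate
        else:
            # exploit + explore: pick the arm with the highest upper confidence bound
            arm = max(
                range(n_arms),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2 * math.log(t) / counts[a]),
            )
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total

# Usage: three Bernoulli arms with hidden success probabilities (hypothetical values).
probs = [0.2, 0.5, 0.7]
total_reward = ucb1(lambda a: float(random.random() < probs[a]), n_arms=3, horizon=10_000)
print(total_reward)
```

The confidence bonus shrinks as an arm is pulled more often, so under-sampled arms keep getting explored while well-estimated good arms are exploited.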

Papers

Showing 211–220 of 1262 papers

Title | Status | Hype
From Complexity to Simplicity: Adaptive ES-Active Subspaces for Blackbox Optimization | Code | 0
Optimal Regret Is Achievable with Bounded Approximate Inference Error: An Enhanced Bayesian Upper Confidence Bound Framework | Code | 0
Optimistic Whittle Index Policy: Online Learning for Restless Bandits | Code | 0
Censored Semi-Bandits: A Framework for Resource Allocation with Censored Feedback | Code | 0
Flooding with Absorption: An Efficient Protocol for Heterogeneous Bandits over Complex Networks | Code | 0
Performance-Aware Self-Configurable Multi-Agent Networks: A Distributed Submodular Approach for Simultaneous Coordination and Network Design | Code | 0
Contextual Bandits with Smooth Regret: Efficient Learning in Continuous Action Spaces | Code | 0
Causal Contextual Bandits with Adaptive Context | Code | 0
Addressing the Long-term Impact of ML Decisions via Policy Regret | Code | 0
From Theory to Practice with RAVEN-UCB: Addressing Non-Stationarity in Multi-Armed Bandits through Variance Adaptation | Code | 0
Page 22 of 127

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | — | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | — | Unverified
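For reference, the cumulative regret metric reported above measures the total reward forgone by not always playing the best arm. A common definition, assuming mean rewards with optimal mean $\mu^{*}$ and $\mu_{a_t}$ the mean of the arm played at step $t$ over a horizon of $T$ rounds, is:

```latex
R_T = \sum_{t=1}^{T} \left( \mu^{*} - \mu_{a_t} \right)
```

Lower values are better; a policy whose regret grows sublinearly in $T$ is, on average, converging to the best arm.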