SOTAVerified

Multi-Armed Bandits

Multi-armed bandits refer to a class of problems in which a fixed, limited set of resources must be allocated among competing choices (the "arms") in a way that maximizes expected gain, even though each choice's payoff is only partially known at the time of allocation. These problems typically involve an exploration/exploitation trade-off: balancing pulls of the arm that currently looks best against pulls of less-tried arms that might turn out to be better.
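As a concrete illustration of the exploration/exploitation trade-off, here is a minimal sketch of the classic epsilon-greedy strategy on a Bernoulli bandit. The arm probabilities, the epsilon value, and the horizon are illustrative assumptions, not taken from any paper listed on this page.

```python
import random

def epsilon_greedy(true_probs, epsilon=0.1, horizon=1000):
    """Play a Bernoulli bandit with epsilon-greedy arm selection.

    true_probs: per-arm success probabilities (hidden from the agent).
    epsilon:    fraction of rounds spent exploring uniformly at random.
    horizon:    total number of pulls.
    """
    n_arms = len(true_probs)
    counts = [0] * n_arms     # pulls per arm
    values = [0.0] * n_arms   # running mean reward per arm
    total_reward = 0.0

    for _ in range(horizon):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)                      # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])   # exploit
        reward = 1.0 if random.random() < true_probs[arm] else 0.0
        counts[arm] += 1
        # incremental mean update for the pulled arm
        values[arm] += (reward - values[arm]) / counts[arm]
        total_reward += reward
    return total_reward, values

# Example: three arms with hidden success rates 0.2, 0.5, 0.7
print(epsilon_greedy([0.2, 0.5, 0.7]))
```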

(Image credit: Microsoft Research)

Papers

Showing 201–210 of 1262 papers

Title | Status | Hype
Balancing Act: Prioritization Strategies for LLM-Designed Restless Bandit Rewards | — | 0
Multi-agent Multi-armed Bandits with Stochastic Sharable Arm Capacities | — | 0
GINO-Q: Learning an Asymptotically Optimal Index Policy for Restless Multi-armed Bandits | — | 0
Contextual Bandits for Unbounded Context Distributions | — | 0
Reciprocal Learning | — | 0
Hierarchical Multi-Armed Bandits for the Concurrent Intelligent Tutoring of Concepts and Problems of Varying Difficulty Levels | Code | 0
Mitigating Exposure Bias in Online Learning to Rank Recommendation: A Novel Reward Model for Cascading Bandits | Code | 0
Combining Diverse Information for Coordinated Action: Stochastic Bandit Algorithms for Heterogeneous Agents | Code | 0
Empathic Responding for Digital Interpersonal Emotion Regulation via Content Recommendation | — | 0
Online Learning for Autonomous Management of Intent-based 6G Networks | — | 0
Page 21 of 127

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | — | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | — | Unverified
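For context, cumulative regret (the metric reported above) is the gap between the reward an oracle that always plays the best arm would collect and the reward the algorithm actually collected, summed over rounds. A minimal sketch follows; the function name and the evaluator's access to the true arm means are assumptions for illustration, not part of this benchmark's protocol.

```python
def cumulative_regret(true_means, arms_played):
    """Regret after each round: best achievable expected reward minus
    the expected reward of the arm actually chosen, summed over time.

    true_means:  per-arm expected rewards (known only to the evaluator).
    arms_played: sequence of arm indices chosen by the algorithm.
    """
    best = max(true_means)
    regret, curve = 0.0, []
    for arm in arms_played:
        regret += best - true_means[arm]
        curve.append(regret)
    return curve

# Example: an algorithm that settles on the best arm (index 2)
print(cumulative_regret([0.2, 0.5, 0.7], [0, 1, 2, 2, 1, 2])[-1])
```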