
Multi-Armed Bandits

Multi-armed bandits refer to problems in which a fixed, limited amount of resources must be allocated among competing alternatives in a way that maximizes expected gain, even though each alternative's payoff is only partially known at the time of allocation. These problems typically involve an exploration/exploitation trade-off.

(Image credit: Microsoft Research)
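The exploration/exploitation trade-off is easiest to see in a concrete policy. Below is a minimal sketch of the classic UCB1 algorithm (Auer et al., 2002) run on simulated Bernoulli arms; the function name, arm probabilities, and horizon are illustrative assumptions, not values taken from any paper or benchmark on this page.

```python
import math
import random

def ucb1(arm_probs, horizon):
    """Minimal UCB1 sketch for Bernoulli-reward arms (illustrative only)."""
    n_arms = len(arm_probs)
    counts = [0] * n_arms    # number of pulls per arm
    means = [0.0] * n_arms   # empirical mean reward per arm
    total_reward = 0.0

    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # initialization: pull every arm once
        else:
            # Pick the arm with the highest upper confidence bound:
            # empirical mean (exploitation) + uncertainty bonus (exploration).
            arm = max(range(n_arms),
                      key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))

        reward = 1.0 if random.random() < arm_probs[arm] else 0.0  # simulated pull
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # incremental mean update
        total_reward += reward

    return total_reward

# Hypothetical example: three arms whose payout rates the learner does not know.
print(ucb1([0.3, 0.5, 0.7], horizon=10_000))
```

The confidence bonus shrinks as an arm accumulates pulls, so the policy samples uncertain arms early and gradually concentrates on the empirically best one; this is the trade-off that most of the papers listed below refine or extend.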

Papers

Showing 251–300 of 1262 papers

- Communication Efficient Distributed Learning for Kernelized Contextual Bandits
- Adversarial Bandits with Knapsacks
- Computationally Efficient Horizon-Free Reinforcement Learning for Linear Mixture MDPs
- Concurrent Decentralized Channel Allocation and Access Point Selection using Multi-Armed Bandits in multi BSS WLANs
- Adapting to Delays and Data in Adversarial Multi-Armed Bandits
- Combining Online Learning and Offline Learning for Contextual Bandits with Deficient Support
- A Survey of Learning in Multiagent Environments: Dealing with Non-Stationarity
- Combining Difficulty Ranking with Multi-Armed Bandits to Sequence Educational Content
- Combinatorial Semi-Bandits with Knapsacks
- A Sleeping, Recovering Bandit Algorithm for Optimizing Recurring Notifications
- Adversarial Attacks on Linear Contextual Bandits
- Combinatorial Pure Exploration with Full-bandit Feedback and Beyond: Solving Combinatorial Optimization under Uncertainty with Limited Observation
- Combinatorial Pure Exploration of Multi-Armed Bandits
- A Simple and Optimal Policy Design with Safety against Heavy-Tailed Risk for Stochastic Bandits
- Combinatorial Network Optimization with Unknown Variables: Multi-Armed Bandits with Linear Rewards
- Combinatorial Multivariant Multi-Armed Bandits with Applications to Episodic Reinforcement Learning and Beyond
- A Risk-Averse Framework for Non-Stationary Stochastic Multi-Armed Bandits
- Adversarial Attacks on Cooperative Multi-agent Bandits
- A Classification View on Meta Learning Bandits
- Combinatorial Multi-Armed Bandits with Filtered Feedback
- Combinatorial Multi-armed Bandits for Real-Time Strategy Games
- A Reinforcement-Learning-Enhanced LLM Framework for Automated A/B Testing in Personalized Marketing
- Combinatorial Multi-armed Bandits: Arm Selection via Group Testing
- A Regret bound for Non-stationary Multi-Armed Bandits with Fairness Constraints
- Bayesian Analysis of Combinatorial Gaussian Process Bandits
- Top-k Combinatorial Bandits with Full-Bandit Feedback
- A Provably Efficient Model-Free Posterior Sampling Method for Episodic Reinforcement Learning
- Communication-Efficient Collaborative Regret Minimization in Multi-Armed Bandits
- Adversarial Attacks on Adversarial Bandits
- Adapting Bandit Algorithms for Settings with Sequentially Available Arms
- Collaborative Multi-Agent Heterogeneous Multi-Armed Bandits
- Collaborative Min-Max Regret in Grouped Multi-Armed Bandits
- Approximately Stationary Bandits with Knapsacks
- Collaborative Learning with Limited Interaction: Tight Bounds for Distributed Exploration in Multi-Armed Bandits
- Parallel Best Arm Identification in Heterogeneous Environments
- Approximate Function Evaluation via Multi-Armed Bandits
- Bandits with Knapsacks beyond the Worst-Case
- COBRA: Contextual Bandit Algorithm for Ensuring Truthful Strategic Agents
- Clustered Linear Contextual Bandits with Knapsacks
- A One-Size-Fits-All Solution to Conservative Bandit Problems
- Classical Bandit Algorithms for Entanglement Detection in Parameterized Qubit States
- Censored Semi-Bandits for Resource Allocation
- A Decision-Language Model (DLM) for Dynamic Restless Multi-Armed Bandit Tasks in Public Health
- AdaptEx: A Self-Service Contextual Bandit Platform
- Achieving User-Side Fairness in Contextual Bandits
- Context in Public Health for Underserved Communities: A Bayesian Approach to Online Restless Bandits
- Causal Feature Selection Method for Contextual Multi-Armed Bandits in Recommender System
- Causal Contextual Bandits with Targeted Interventions
- A Novel Approach to Balance Convenience and Nutrition in Meals With Long-Term Group Recommendations and Reasoning on Multimodal Recipes and its Implementation in BEACON
- Causal Bandits: Online Decision-Making in Endogenous Settings

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | | Unverified |
| 2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | | Unverified |