SOTAVerified

Multi-Armed Bandits

Multi-armed bandits refer to a task in which a fixed amount of resources must be allocated among competing choices in a way that maximizes expected gain, even though each choice's reward is only partially known at allocation time. Typically these problems involve an exploration/exploitation trade-off: the learner must balance trying under-explored choices against repeatedly playing the choice that currently looks best.
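
As a concrete illustration of that trade-off, here is a minimal epsilon-greedy sketch on a synthetic Bernoulli bandit. The arm means, horizon, and epsilon value are illustrative assumptions chosen for the example, not taken from any of the papers listed below.

```python
import random

def epsilon_greedy(arm_means, horizon=10_000, epsilon=0.1, seed=0):
    """Epsilon-greedy on a Bernoulli bandit: explore with prob. epsilon, else exploit."""
    rng = random.Random(seed)
    n_arms = len(arm_means)
    counts = [0] * n_arms    # pulls per arm
    values = [0.0] * n_arms  # running mean reward per arm
    total_reward = 0.0
    for _ in range(horizon):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)  # explore: pick a uniformly random arm
        else:
            arm = max(range(n_arms), key=values.__getitem__)  # exploit: best empirical arm
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0  # Bernoulli draw
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update
        total_reward += reward
    return total_reward, counts

# Hypothetical three-armed bandit; pulls should concentrate on the 0.7 arm.
total, counts = epsilon_greedy([0.2, 0.5, 0.7])
print(total, counts)
```

A fixed epsilon keeps exploring forever; many of the algorithms in the papers below (UCB-style index policies, Thompson sampling) instead shrink exploration as reward estimates sharpen.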

(Image credit: Microsoft Research)

Papers

Showing 551–600 of 1262 papers

Title | Status | Hype
Access Probability Optimization in RACH: A Multi-Armed Bandits Approach | - | 0
A Bandit Approach to Sequential Experimental Design with False Discovery Control | - | 0
Coordinated Attacks against Contextual Bandits: Fundamental Limits and Defense Mechanisms | - | 0
Cooperative Stochastic Multi-agent Multi-armed Bandits Robust to Adversarial Corruptions | - | 0
Preferences Evolve And So Should Your Bandits: Bandits with Evolving States for Online Platforms | - | 0
Cooperative Multi-agent Bandits: Distributed Algorithms with Optimal Individual Regret and Constant Communication Costs | - | 0
Convex Hull Monte-Carlo Tree Search | - | 0
Bandits Warm-up Cold Recommender Systems | - | 0
Algorithms for multi-armed bandit problems | - | 0
Continuous-Time Multi-Armed Bandits with Controlled Restarts | - | 0
Continuous K-Max Bandits | - | 0
Bandit Social Learning: Exploration under Myopic Behavior | - | 0
Context Uncertainty in Contextual Bandits with Applications to Recommender Systems | - | 0
Contextual Restless Multi-Armed Bandits with Application to Demand Response Decision-Making | - | 0
Bandits meet Computer Architecture: Designing a Smartly-allocated Cache | - | 0
Algorithms for Differentially Private Multi-Armed Bandits | - | 0
Contextual Pandora's Box | - | 0
Contextual Online Decision Making with Infinite-Dimensional Functional Regression | - | 0
Bandits for Learning to Explain from Explanations | - | 0
Contextual Multinomial Logit Bandits with General Value Functions | - | 0
Bandits Don’t Follow Rules: Balancing Multi-Facet Machine Translation with Multi-Armed Bandits | - | 0
A KL-LUCB algorithm for Large-Scale Crowdsourcing | - | 0
Contextual Multi-Armed Bandits for Causal Marketing | - | 0
Contextual memory bandit for pro-active dialog engagement | - | 0
Contextual Linear Bandits with Delay as Payoff | - | 0
Contextual Information-Directed Sampling | - | 0
Bandit Regret Scaling with the Effective Loss Range | - | 0
A Hybrid Meta-Learning and Multi-Armed Bandit Approach for Context-Specific Multi-Objective Recommendation Optimization | - | 0
Adaptive Data Augmentation for Thompson Sampling | - | 0
A conversion theorem and minimax optimality for continuum contextual bandits | - | 0
Contextual Combinatorial Multi-armed Bandits with Volatile Arms and Submodular Reward | - | 0
BanditRank: Learning to Rank Using Contextual Bandits | - | 0
Contextual Combinatorial Conservative Bandits | - | 0
Contextual Causal Bayesian Optimisation | - | 0
BanditQ: Fair Bandits with Guaranteed Rewards | - | 0
A Hierarchical Nearest Neighbour Approach to Contextual Bandits | - | 0
Contextual Bandit with Herding Effects: Algorithms and Recommendation Applications | - | 0
Individual Regret in Cooperative Stochastic Multi-Armed Bandits | - | 0
Individual Regret in Cooperative Nonstochastic Multi-Armed Bandits | - | 0
Contextual bandits with surrogate losses: Margin bounds and efficient algorithms | - | 0
Indexed Minimum Empirical Divergence-Based Algorithms for Linear Bandits | - | 0
Indexability and Rollout Policy for Multi-State Partially Observable Restless Bandits | - | 0
Increasing Students' Engagement to Reminder Emails Through Multi-Armed Bandits | - | 0
Contextual Bandits with Stage-wise Constraints | - | 0
A General Theory of the Stochastic Linear Bandit and Its Applications | - | 0
In-Domain African Languages Translation Using LLMs and Multi-armed Bandits | - | 0
Inference for Batched Bandits | - | 0
Adaptive Contract Design for Crowdsourcing Markets: Bandit Algorithms for Repeated Principal-Agent Problems | - | 0
Contextual Bandits with Sparse Data in Web setting | - | 0
Page 12 of 26

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | NeuralLinear FullPosterior-MR | Cumulative regret | 1.92 | - | Unverified
2 | Linear FullPosterior-MR | Cumulative regret | 1.82 | - | Unverified
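
For context on the metric above: cumulative regret is the gap between the expected reward of always playing the best arm and the reward the algorithm actually collects. Below is a minimal sketch of the standard computation for a simulated run; the arm means and the recorded arm choices are hypothetical, unrelated to the benchmark numbers above.

```python
def cumulative_regret(arm_means, arms_played):
    """Sum of per-step gaps between the best arm's mean and the played arm's mean."""
    best = max(arm_means)
    return sum(best - arm_means[arm] for arm in arms_played)

# Hypothetical trace: arm 2 (mean 0.7) is optimal; early exploration incurs regret.
print(cumulative_regret([0.2, 0.5, 0.7], [0, 1, 2, 2, 1, 2]))  # 0.5 + 0.2 + 0.2 = 0.9
```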