Best-of-Both-Worlds Multi-Dueling Bandits: Unified Algorithms for Stochastic and Adversarial Preferences under Condorcet and Borda Objectives

2026-03-19

S. Akash, Pratik Gajane, Jawar Singh

Abstract

Multi-dueling bandits, where a learner selects m ≥ 2 arms per round and observes only the winner, arise naturally in applications such as ranking and recommendation systems, yet a fundamental question has remained open: can a single algorithm perform optimally in both stochastic and adversarial environments without knowing which regime it faces? We answer this affirmatively, providing the first best-of-both-worlds algorithms for multi-dueling bandits under both Condorcet and Borda objectives. For the Condorcet setting, we propose MetaDueling, a black-box reduction that converts any dueling bandit algorithm into a multi-dueling bandit algorithm by transforming multi-way winner feedback into an unbiased pairwise signal. Instantiating our reduction with Versatile-DB yields the first best-of-both-worlds algorithm for multi-dueling bandits: it achieves O(√(KT)) pseudo-regret against adversarial preferences and the instance-optimal O(Σ_{i ≠ a*} log T / Δ_i) pseudo-regret under stochastic preferences, both simultaneously and without prior knowledge of the regime. For the Borda setting, we propose a stochastic-and-adversarial algorithm that achieves O(K² log(KT) + K log² T + Σ_{i: Δ_i^B > 0} K log(KT) / (Δ_i^B)²) regret in stochastic environments and O(K√(T log(KT)) + K^{1/3} T^{2/3} (log K)^{1/3}) regret against adversaries, again without prior knowledge of the regime. We complement our upper bounds with matching lower bounds for the Condorcet setting. For the Borda setting, our upper bounds are near-optimal with respect to the lower bounds (within a factor of K) and match the best-known results in the literature.
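The core of the reduction is turning a single multi-way winner observation into a pairwise comparison. One plausible construction, sketched below in Python, conditions on the winner falling in a chosen pair (i, j): under a Plackett–Luce-style preference model, the conditional event "i wins given the winner is in {i, j}" matches the pairwise win probability of i over j. The model and function names here are illustrative assumptions, not the paper's exact construction.

```python
import random

def draw_winner(scores, chosen, rng):
    # Plackett-Luce winner among the chosen arms:
    # arm a wins with probability scores[a] / sum(scores over chosen).
    total = sum(scores[a] for a in chosen)
    r = rng.random() * total
    for a in chosen:
        r -= scores[a]
        if r <= 0:
            return a
    return chosen[-1]

def pairwise_signal(winner, i, j):
    # Keep the round only if the winner is i or j; the conditional
    # outcome then behaves like a duel between i and j.
    if winner == i:
        return 1
    if winner == j:
        return 0
    return None  # uninformative for this pair

# Toy check of unbiasedness: with scores (3, 1, 1), the pairwise
# probability that arm 0 beats arm 1 is 3 / (3 + 1) = 0.75.
rng = random.Random(0)
scores = [3.0, 1.0, 1.0]
wins = trials = 0
for _ in range(20000):
    w = draw_winner(scores, [0, 1, 2], rng)
    s = pairwise_signal(w, 0, 1)
    if s is not None:
        trials += 1
        wins += s
# wins / trials concentrates near 0.75
```

The discarded rounds (winner outside the pair) are the price of the reduction; scheduling which pair to "watch" each round is what the wrapped dueling bandit algorithm supplies.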