Multimodal Bandits: Regret Lower Bounds and Optimal Algorithms
2025-10-29
William Réveillard, Richard Combes
Abstract
We consider a stochastic multi-armed bandit problem with i.i.d. rewards in which the expected reward function is multimodal with at most m modes. We propose the first known computationally tractable algorithm for solving the Graves-Lai optimization problem, which in turn enables the implementation of asymptotically optimal algorithms for this bandit problem. The code for the proposed algorithms is publicly available at https://github.com/wilrev/MultimodalBandits.
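For context, the Graves-Lai optimization problem referenced in the abstract typically takes the following form (this is the standard formulation for structured bandits, not reproduced from the paper itself; the notation below, with mean vector θ, gaps Δ_k, and confusing-parameter set Λ(θ), is our own hedged sketch):

```latex
% Standard Graves-Lai lower bound for structured bandits (sketch).
% For any uniformly good policy, the regret R(T) satisfies
%   liminf_{T -> infinity} R(T) / log T >= C(theta),
% where C(theta) is the value of the optimization problem below.
\liminf_{T \to \infty} \frac{R(T)}{\log T} \ge C(\theta),
\qquad
C(\theta) = \inf_{\alpha \in \mathbb{R}_+^K}
  \sum_{k=1}^{K} \alpha_k \Delta_k
\quad \text{s.t.} \quad
\inf_{\lambda \in \Lambda(\theta)} \sum_{k=1}^{K} \alpha_k \,
  \mathrm{KL}(\theta_k, \lambda_k) \ge 1,
```

Here Δ_k is the suboptimality gap of arm k and Λ(θ) is the set of "confusing" parameters: mean vectors in the structured class (here, multimodal with at most m modes) that have a different optimal arm yet are statistically indistinguishable on the optimal arm. The inner infimum over Λ(θ) is what generally makes this problem hard to solve; the paper's contribution is a tractable algorithm for it in the multimodal case.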