The Best Arm Evades: Near-optimal Multi-pass Streaming Lower Bounds for Pure Exploration in Multi-armed Bandits
2023-09-06
Sepehr Assadi, Chen Wang
Abstract
We give a near-optimal sample-pass trade-off for pure exploration in multi-armed bandits (MABs) via multi-pass streaming algorithms: any streaming algorithm with sublinear memory that uses the optimal sample complexity of O(n/Δ²) requires Ω(log(1/Δ)/log log(1/Δ)) passes. Here, n is the number of arms and Δ is the reward gap between the best and the second-best arms. Our result matches the O(log(1/Δ))-pass algorithm of Jin et al. [ICML'21] (up to lower order terms) that only uses O(1) memory and answers an open question posed by Assadi and Wang [STOC'20].
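To make the memory/pass regime in the abstract concrete, here is a toy sketch of a multi-pass streaming best-arm procedure in the spirit of the O(1)-memory, O(log(1/Δ))-pass setting. This is a hypothetical illustration, not the actual algorithm of Jin et al.: it assumes Bernoulli-reward arms given as a list of means, keeps a single candidate arm in memory, and in each pass compares the candidate against every arriving arm with a per-pass sample budget that grows as the target gap halves.

```python
import math
import random

def best_arm_multipass(means, delta, passes=None, seed=0):
    """Toy multi-pass streaming best-arm sketch (illustrative only).

    Each pass streams over the n arms while storing only one candidate
    arm (constant arm-memory). The per-pass target gap eps halves each
    pass, so roughly log2(1/delta) passes suffice to resolve a gap of
    delta, and the per-arm sample budget scales as 1/eps^2.
    """
    rng = random.Random(seed)
    n = len(means)
    if passes is None:
        # O(log(1/delta)) passes, mirroring the trade-off in the abstract.
        passes = max(1, math.ceil(math.log2(1.0 / delta)))
    candidate = 0
    for p in range(1, passes + 1):
        eps = 2.0 ** (-p)                    # target gap for this pass
        budget = math.ceil(4.0 / eps ** 2)   # samples per arm this pass
        for arm in range(n):                 # one streaming pass over arms
            if arm == candidate:
                continue
            # Draw fresh Bernoulli samples for the candidate and the
            # arriving arm; keep whichever looks better empirically.
            cand_hat = sum(rng.random() < means[candidate]
                           for _ in range(budget)) / budget
            arm_hat = sum(rng.random() < means[arm]
                          for _ in range(budget)) / budget
            if arm_hat > cand_hat:
                candidate = arm              # evict the old candidate
    return candidate
```

The total sample count here is (passes × n × 2 × budget), which grows with 1/Δ², loosely matching the O(n/Δ²) sample regime the lower bound addresses; the paper's result says that within that sample budget, sublinear memory forces Ω(log(1/Δ)/log log(1/Δ)) such passes.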