
Quick-Draw Bandits: Quickly Optimizing in Nonstationary Environments with Extremely Many Arms

2025-05-30

Derek Everett, Fred Lu, Edward Raff, Fernando Camacho, James Holt


Abstract

Canonical algorithms for multi-armed bandits typically assume a stationary reward environment and a small action space (number of arms). More recent methods typically relax only one of these assumptions: existing non-stationary bandit policies are designed for a small number of arms, while Lipschitz, linear, and Gaussian process bandit policies are designed to handle a large (or infinite) number of arms in stationary reward environments under constraints on the reward function. In this manuscript, we propose a novel policy that learns reward environments over a continuous space using Gaussian interpolation. We show that our method efficiently learns continuous Lipschitz reward functions with O^*(T) cumulative regret. Furthermore, our method naturally extends to non-stationary problems with a simple modification. Finally, we demonstrate that our method is computationally favorable (100-10,000x faster) and experimentally outperforms sliding Gaussian process policies on datasets with non-stationarity and an extremely large number of arms.
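The abstract's core idea, estimating a continuous reward surface by Gaussian interpolation of observed pulls and acting optimistically on it, can be sketched as follows. This is an illustrative toy, not the paper's algorithm: the Nadaraya-Watson kernel estimate, the bandwidth, the UCB-style exploration bonus, and the toy reward function are all assumptions made for the sketch.

```python
import numpy as np

def interpolate(x_query, xs, ys, bandwidth=0.05):
    """Gaussian-kernel (Nadaraya-Watson) estimate of the reward surface,
    plus the local kernel mass, which we reuse for an exploration bonus.
    (Illustrative stand-in for the paper's Gaussian interpolation.)"""
    d = x_query[:, None] - xs[None, :]
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    mass = w.sum(axis=1)                      # how much data sits near each arm
    est = (w @ ys) / np.maximum(mass, 1e-12)  # weighted average of observed rewards
    return est, mass

rng = np.random.default_rng(0)
arms = np.linspace(0.0, 1.0, 10_000)          # "extremely many" discretized arms
reward_fn = lambda x: np.sin(3.0 * x)         # toy Lipschitz reward, unknown to the learner
xs = np.array([rng.random()])                 # one random pull to start
ys = reward_fn(xs) + 0.1 * rng.normal(size=1)

for t in range(300):
    est, mass = interpolate(arms, xs, ys)
    ucb = est + 1.0 / np.sqrt(1.0 + mass)     # be optimistic where data is sparse
    a = arms[np.argmax(ucb)]                  # pull the most promising arm
    xs = np.append(xs, a)
    ys = np.append(ys, reward_fn(a) + 0.1 * rng.normal())

best_arm = arms[np.argmax(interpolate(arms, xs, ys)[0])]
```

Because each step only evaluates a kernel against the history, there is no Gaussian process posterior to invert, which is consistent with the claimed computational advantage. A simple modification in the spirit of the non-stationary extension would be to down-weight or discard old observations (e.g., a discount factor on `ys` or a sliding window over `xs`), though the paper's exact mechanism is not specified here.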
