SOTAVerified

Open Problem: Tight Bounds for Kernelized Multi-Armed Bandits with Bernoulli Rewards

2024-07-08

Marco Mussi, Simone Drago, Alberto Maria Metelli


Abstract

We consider Kernelized Bandits (KBs) to optimize a function f : X → [0,1] belonging to the Reproducing Kernel Hilbert Space (RKHS) H_k. Mainstream works on kernelized bandits focus on a subgaussian noise model, in which observations of the form f(x_t) + ε_t are available, where ε_t is a subgaussian noise (Chowdhury and Gopalan, 2017). In contrast, we focus on the case in which we observe realizations y_t ∼ Ber(f(x_t)) sampled from a Bernoulli distribution with parameter f(x_t). While the Bernoulli model has been investigated successfully in multi-armed bandits (Garivier and Cappé, 2011), logistic bandits (Faury et al., 2022), and bandits in metric spaces (Magureanu et al., 2014), it remains an open question whether tight results can be obtained for KBs. This paper aims to draw the attention of the online learning community to this open problem.
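The two observation models in the abstract can be contrasted with a minimal sketch. The function `f`, the RBF kernel, its `centers`, `alphas`, and lengthscale below are all hypothetical choices for illustration (the paper does not fix a specific kernel or function); the only points taken from the abstract are that f lies in an RKHS, maps into [0,1], and that the learner sees y_t ∼ Ber(f(x_t)) instead of f(x_t) + ε_t:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(x, centers, lengthscale=0.2):
    """RBF kernel k(x, c) evaluated against each center c (a common choice of k)."""
    return np.exp(-0.5 * ((x - centers) / lengthscale) ** 2)

# Hypothetical target f: a finite kernel expansion f(x) = sum_i alpha_i k(x, c_i),
# clipped so that f maps into [0, 1] as the problem statement requires.
centers = np.array([0.2, 0.5, 0.8])
alphas = np.array([0.6, -0.3, 0.5])

def f(x):
    return float(np.clip(np.sum(alphas * rbf_kernel(x, centers)), 0.0, 1.0))

def observe_bernoulli(x):
    """Bernoulli feedback y_t ~ Ber(f(x_t)): a single binary outcome per pull."""
    return int(rng.binomial(1, f(x)))

def observe_subgaussian(x, sigma=0.1):
    """The mainstream model for comparison: f(x_t) + eps_t with subgaussian eps_t."""
    return f(x) + rng.normal(0.0, sigma)

for x in np.linspace(0.0, 1.0, 5):
    print(f"x={x:.2f}  f(x)={f(x):.3f}  bernoulli={observe_bernoulli(x)}")
```

Note that in the Bernoulli model the noise variance f(x)(1 − f(x)) shrinks as f(x) approaches 0 or 1, which is exactly the structure that Bernoulli-aware analyses exploit and that a generic subgaussian treatment ignores.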
