
Rising Rested Bandits: Lower Bounds and Efficient Algorithms

2024-11-06

Marco Fiandri, Alberto Maria Metelli, Francesco Trovò


Abstract

This paper is in the field of stochastic Multi-Armed Bandits (MABs), i.e., sequential selection techniques that learn online using only the feedback given by the chosen option (a.k.a. arm). We study a particular case of rested bandits in which the arms' expected rewards are monotonically non-decreasing and concave. We study the inherent sample complexity of the regret minimization problem by deriving suitable regret lower bounds. Then, we design an algorithm for the rested case, R-ed-UCB, providing a regret bound that depends on the properties of the instance and is, under certain circumstances, of order O(T^{2/3}). We empirically compare our algorithm with state-of-the-art methods for non-stationary MABs over several synthetically generated tasks and an online model selection problem for a real-world dataset.
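To make the setting concrete, the following is a minimal, self-contained sketch of a rising rested bandit instance: each arm's expected reward is a non-decreasing concave function of how many times that arm itself has been pulled, and a sliding-window UCB-style learner tracks the rising means. This is an illustrative toy, not the paper's R-ed-UCB; the reward curves, window size, and exploration bonus are all invented for demonstration.

```python
import math
import random

def expected_reward(arm, pulls):
    # Concave, non-decreasing reward curves (illustrative, not from the paper):
    # mu_i(n) = cap_i * (1 - exp(-rate_i * n)), rising with the arm's own pull count.
    caps = [0.9, 0.7]
    rates = [0.05, 0.2]
    return caps[arm] * (1 - math.exp(-rates[arm] * pulls))

def run(horizon=5000, seed=0):
    rng = random.Random(seed)
    n_arms = 2
    pulls = [0] * n_arms
    history = [[] for _ in range(n_arms)]  # per-arm observed rewards
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # pull each arm once to initialize
        else:
            ucbs = []
            for i in range(n_arms):
                # Average only a window of recent pulls: older samples come from
                # a lower point of the rising curve, so they bias the mean down.
                w = max(1, int(math.sqrt(pulls[i])))
                recent = history[i][-w:]
                mean = sum(recent) / len(recent)
                bonus = math.sqrt(2 * math.log(t) / len(recent))
                ucbs.append(mean + bonus)
            arm = ucbs.index(max(ucbs))
        # Rested dynamics: the mean depends on this arm's pull count, plus noise.
        reward = expected_reward(arm, pulls[arm]) + rng.gauss(0, 0.1)
        pulls[arm] += 1
        history[arm].append(reward)
        total += reward
    return total, pulls
```

Because the curves rise only when an arm is pulled (the rested property), an estimator over the full history would systematically underestimate the current mean; windowing over recent pulls is one simple way to handle that, in the spirit of, but much cruder than, the estimators analyzed in the paper.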
