
Adversarial Bandit Optimization with Globally Bounded Perturbations to Linear Losses

2026-03-27

Zhuoyu Cheng, Kohei Hatano, Eiji Takimoto


Abstract

We study a class of adversarial bandit optimization problems in which the loss functions may be non-convex and non-smooth. In each round, the learner observes a loss that consists of an underlying linear component together with an additional perturbation applied after the learner selects an action. The perturbations are measured relative to the linear losses and are constrained by a global budget that bounds their cumulative magnitude over time. Under this model, we establish both expected and high-probability regret guarantees. As a special case of our analysis, we recover an improved high-probability regret bound for classical bandit linear optimization, which corresponds to the setting without perturbations. We further complement our upper bounds by proving a lower bound on the expected regret.
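The loss model described above can be illustrated with a toy simulation. This is a minimal sketch under assumptions of our own (coordinate actions, uniform linear losses, a greedy perturbation schedule, and a fixed per-round perturbation cap), not the paper's algorithm or experimental setup: each round the learner observes only the perturbed scalar loss, and the adversary's cumulative perturbation magnitude is capped by a global budget.

```python
import random

def simulate(T=100, budget=5.0, d=3, seed=0):
    """Toy simulation of bandit linear losses with globally budgeted
    perturbations. All concrete choices here (action set, loss
    distribution, perturbation schedule) are illustrative assumptions."""
    rng = random.Random(seed)
    spent = 0.0        # cumulative perturbation magnitude used so far
    total_loss = 0.0   # learner's cumulative (perturbed) loss
    arm_losses = [0.0] * d  # unperturbed linear loss of each fixed action
    for t in range(T):
        l = [rng.uniform(0.0, 1.0) for _ in range(d)]  # linear loss vector
        i = rng.randrange(d)  # stand-in for the learner's randomized choice
        # Adversary perturbs after seeing the action, respecting the
        # global budget: sum over rounds of |c_t| <= budget.
        c = max(0.0, min(0.1, budget - spent))
        spent += abs(c)
        # Bandit feedback: only the perturbed linear loss is observed.
        total_loss += l[i] + c
        for j in range(d):
            arm_losses[j] += l[j]
    # Regret against the best fixed action in hindsight (unperturbed losses).
    regret = total_loss - min(arm_losses)
    return regret, spent

regret, spent = simulate()
```

The quantity `spent` never exceeds the budget, which is the global constraint the paper's regret bounds are stated in terms of; as the budget shrinks to zero, the model degenerates to classical bandit linear optimization.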
