
Tight Bounds for Bandit Combinatorial Optimization

2017-02-24

Alon Cohen, Tamir Hazan, Tomer Koren


Abstract

We revisit the study of optimal regret rates in bandit combinatorial optimization, a fundamental framework for sequential decision making under uncertainty that abstracts numerous combinatorial prediction problems. We prove that the attainable regret in this setting grows as $\widetilde{\Theta}(k^{3/2}\sqrt{dT})$, where $d$ is the dimension of the problem and $k$ is a bound over the maximal instantaneous loss, disproving a conjecture of Audibert, Bubeck, and Lugosi (2013), who argued that the optimal rate should be of the form $\widetilde{\Theta}(k\sqrt{dT})$. Our bounds apply to several important instances of the framework, and in particular imply a tight bound for the well-studied bandit shortest path problem. Thereby, we also resolve an open problem posed by Cesa-Bianchi and Lugosi (2012).
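To make the setting concrete, here is a minimal, illustrative sketch of bandit combinatorial optimization: the learner repeatedly picks a size-$k$ subset of $d$ coordinates, observes only the scalar loss of the chosen subset (bandit feedback, bounded by $k$ since each coordinate loss lies in $[0,1]$), and competes with the best fixed subset in hindsight. The reduction below runs vanilla EXP3 over all $\binom{d}{k}$ subsets; this is NOT the paper's algorithm and is statistically suboptimal for this setting, and all parameter choices (d, k, T, the loss means, the learning rate) are assumptions made for the demo.

```python
# Illustrative sketch only (not the paper's method): bandit combinatorial
# optimization with d coordinates and actions = k-subsets, handled by
# treating each subset as one arm of a vanilla EXP3 bandit.
import itertools
import math
import random

random.seed(0)

d, k, T = 5, 2, 2000
actions = list(itertools.combinations(range(d), k))  # all k-subsets of {0..d-1}
n = len(actions)
eta = math.sqrt(2.0 * math.log(n) / (n * T))  # standard EXP3 learning rate

# Assumed stochastic environment: coordinate j incurs a Bernoulli(mu[j]) loss.
mu = [0.1, 0.2, 0.5, 0.7, 0.9]

weights = [1.0] * n
cum_loss = 0.0
for t in range(T):
    total = sum(weights)
    probs = [w / total for w in weights]
    i = random.choices(range(n), weights=probs)[0]
    # Bandit feedback: only the chosen action's total loss is observed.
    # The instantaneous loss is at most k, matching the abstract's bound.
    loss = float(sum(random.random() < mu[j] for j in actions[i]))
    cum_loss += loss
    est = loss / probs[i]               # importance-weighted loss estimate
    weights[i] *= math.exp(-eta * est / k)  # normalize by k to keep the scale in [0, 1]

best = min(sum(mu[j] for j in a) for a in actions) * T  # best fixed subset, in expectation
print(f"cumulative loss {cum_loss:.1f} vs. best-in-hindsight (expected) {best:.1f}")
```

Even this naive reduction sublinearly tracks the best subset; the point of the paper is to pin down the exact dependence on $k$ and $d$ that an optimal algorithm can achieve.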
