
Duality between subgradient and conditional gradient methods

2012-11-27

Francis Bach

Abstract

Given a convex optimization problem and its dual, there are many possible first-order algorithms. In this paper, we show the equivalence between mirror descent algorithms and algorithms generalizing the conditional gradient method. This is done through convex duality, and notably implies that for certain problems, such as supervised machine learning problems with non-smooth losses or problems regularized by non-smooth regularizers, the primal subgradient method and the dual conditional gradient method are formally equivalent. The dual interpretation leads to a form of line search for mirror descent, as well as guarantees of convergence based on primal-dual certificates.
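
To make the claimed primal-dual correspondence concrete, here is a minimal numerical sketch (not code from the paper) on one of the non-smooth problems the abstract alludes to: an ℓ2-regularized SVM with the hinge loss. It runs the primal subgradient method and the conditional gradient (Frank-Wolfe) method on the dual side by side; the dual iterate also yields a computable duality gap, i.e. a primal-dual certificate. All data, step sizes, and variable names are illustrative assumptions, not the paper's general mirror-descent setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (purely illustrative).
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.where(X @ w_true + 0.1 * rng.normal(size=n) >= 0, 1.0, -1.0)

lam = 0.1  # l2 regularization strength

def primal(w):
    # P(w) = (lam/2) ||w||^2 + (1/n) sum_i max(0, 1 - y_i x_i^T w)
    return 0.5 * lam * w @ w + np.mean(np.maximum(0.0, 1.0 - y * (X @ w)))

def w_of_alpha(alpha):
    # Primal point induced by dual variables: w(alpha) = (1/(lam n)) sum_i alpha_i y_i x_i
    return X.T @ (alpha * y) / (lam * n)

def dual(alpha):
    # D(alpha) = (1/n) sum_i alpha_i - (lam/2) ||w(alpha)||^2, with alpha in [0,1]^n
    w = w_of_alpha(alpha)
    return alpha.mean() - 0.5 * lam * w @ w

# --- Primal subgradient method (Pegasos-style 1/(lam*k) steps) ---
w = np.zeros(d)
for k in range(1, 501):
    margins = y * (X @ w)
    g = lam * w - X.T @ (y * (margins < 1.0)) / n  # a subgradient of P at w
    w -= g / (lam * k)

# --- Conditional gradient (Frank-Wolfe) on the dual ---
alpha = np.zeros(n)
for k in range(500):
    grad = (1.0 - y * (X @ w_of_alpha(alpha))) / n  # gradient of D w.r.t. alpha
    s = (grad > 0).astype(float)                    # linear maximization over the box [0,1]^n
    alpha += 2.0 / (k + 2.0) * (s - alpha)          # standard FW step size

print(f"primal objective (subgradient): {primal(w):.4f}")
print(f"primal objective (FW on dual):  {primal(w_of_alpha(alpha)):.4f}")
print(f"duality gap at FW point:        {primal(w_of_alpha(alpha)) - dual(alpha):.4f}")
```

With these standard step sizes both runs approach the same primal optimum, and the printed gap P(w(alpha)) - D(alpha) is a primal-dual certificate of suboptimality: it is nonnegative by weak duality and shrinks as the conditional gradient iterates converge.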
