Untangling Lariats: Subgradient Following of Variationally Penalized Objectives
Kai-Chia Mo, Shai Shalev-Shwartz, Nisæl Shártov
Abstract
We describe an apparatus for subgradient following of the optimum of convex problems with variational penalties. In this setting, we receive a sequence y_1, …, y_n and seek a smooth sequence x_1, …, x_n. The smooth sequence needs to attain the minimum Bregman divergence to the input sequence with additive variational penalties in the general form of ∑_i g_i(x_{i+1} − x_i). We derive known algorithms such as the fused lasso and isotonic regression as special cases of our approach. Our approach also facilitates new variational penalties such as non-smooth barrier functions. We then derive a novel lattice-based procedure for subgradient following of variational penalties characterized through the output of arbitrary convolutional filters. This paradigm yields efficient solvers for high-order filtering problems of temporal sequences in which sparse discrete derivatives such as acceleration and jerk are desirable. We also introduce and analyze new multivariate problems in which x_i, y_i ∈ ℝ^d with variational penalties that depend on ‖x_{i+1} − x_i‖. The norms we consider are ℓ_2 and ℓ_∞, which promote group sparsity.
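To make the problem setting concrete, the following is a minimal sketch of the fused-lasso special case the abstract mentions: the Bregman divergence is taken to be the squared Euclidean distance and each g_i(t) = λ|t|, and the objective is minimized by a plain subgradient method with a diminishing step size (the function names and the solver choice are illustrative assumptions, not the paper's subgradient-following procedure):

```python
import numpy as np

def fused_lasso_objective(x, y, lam):
    # squared-error Bregman divergence plus total-variation penalty
    # sum_i g_i(x_{i+1} - x_i) with g_i(t) = lam * |t|
    return 0.5 * np.sum((x - y) ** 2) + lam * np.sum(np.abs(np.diff(x)))

def subgradient_descent(y, lam, steps=2000):
    # generic subgradient method, kept as a baseline sketch;
    # the paper's subgradient-following approach is more specialized
    x = y.copy()
    best_x, best_obj = x.copy(), fused_lasso_objective(x, y, lam)
    for t in range(1, steps + 1):
        d = np.sign(np.diff(x))       # subgradient of |x_{i+1} - x_i|
        g = x - y                     # gradient of the squared-loss term
        g[:-1] -= lam * d             # penalty subgradient w.r.t. x_i
        g[1:] += lam * d              # penalty subgradient w.r.t. x_{i+1}
        x = x - g / np.sqrt(t)        # diminishing step size
        obj = fused_lasso_objective(x, y, lam)
        if obj < best_obj:            # subgradient steps are not monotone,
            best_x, best_obj = x.copy(), obj  # so track the best iterate
    return best_x

# noisy sequence with two roughly constant segments
y = np.array([1.0, 1.2, 0.9, 5.0, 5.1, 4.8])
x = subgradient_descent(y, lam=0.5)
```

On such piecewise-constant data, the total-variation penalty pulls neighboring values together within each segment while preserving the jump, illustrating the sparse-discrete-derivative behavior the abstract describes.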