Dual Space Preconditioning for Gradient Descent in the Overparameterized Regime
Reza Ghane, Danil Akhtiamov, Babak Hassibi
Abstract
In this work we study the convergence properties of Dual Space Preconditioned Gradient Descent, which encompasses optimizers such as Normalized Gradient Descent, Gradient Clipping and Adam. We consider preconditioners of the form \nabla K, where K: R^p \to R is convex, and assume that the preconditioner is used to train an over-parameterized linear model with a loss of the form \mathcal{L}(XW - Y), for weights W \in R^{d \times k}, labels Y \in R^{n \times k} and data X \in R^{n \times d}. Under these assumptions, we prove that the iterates of the preconditioned gradient descent always converge to a point W_\infty \in R^{d \times k} satisfying XW_\infty = Y. Our proof techniques are of independent interest, as we introduce a novel version of the Bregman divergence together with accompanying identities that allow us to establish convergence. We also study the implicit bias of Dual Space Preconditioned Gradient Descent. First, we demonstrate empirically that, for general K(\cdot), W_\infty depends on the chosen learning rate, which hinders a precise characterization of the implicit bias. Then, for preconditioners of the form K(G) = h(\|G\|_F), known as isotropic preconditioners, we show that W_\infty minimizes \|W - W_0\|_F^2 over all W satisfying XW = Y, where W_0 is the initialization. Denoting the convergence point of gradient descent initialized at W_0 by W_{GD,\infty}, we thus obtain W_\infty = W_{GD,\infty} for isotropic preconditioners. Finally, we show that a similar fact holds for general preconditioners up to a multiplicative constant, namely \|W_0 - W_\infty\|_F \le c \|W_0 - W_{GD,\infty}\|_F for a constant c > 0.
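To make the setting concrete, below is a minimal numerical sketch of dual space preconditioned gradient descent on an over-parameterized linear model. It assumes the update rule W_{t+1} = W_t - \eta \nabla K(\nabla f(W_t)) from the dual preconditioning literature, with the squared loss f(W) = (1/2)\|XW - Y\|_F^2 and the isotropic preconditioner K(G) = \sqrt{1 + \|G\|_F^2} (a smoothed form of gradient normalization). These choices, as well as all problem sizes, step sizes and function names, are illustrative assumptions and are not taken from the paper's experiments; the sketch only checks that the iterates interpolate the data and that the limit matches the projection of W_0 onto {W : XW = Y}, i.e. W_{GD,\infty}.

```python
import numpy as np

rng = np.random.default_rng(0)

# Over-parameterized linear model: n < d, so {W : XW = Y} is non-empty (generically).
n, d, k = 5, 20, 2
X = rng.standard_normal((n, d))
Y = rng.standard_normal((n, k))
W0 = rng.standard_normal((d, k))


def grad_loss(W):
    # Gradient of the squared loss f(W) = 0.5 * ||XW - Y||_F^2.
    return X.T @ (X @ W - Y)


def grad_K_isotropic(G):
    # Isotropic preconditioner K(G) = sqrt(1 + ||G||_F^2), so
    # grad K(G) = G / sqrt(1 + ||G||_F^2) (smoothed gradient normalization).
    return G / np.sqrt(1.0 + np.linalg.norm(G) ** 2)


def dual_preconditioned_gd(W0, grad_K, lr=1e-2, steps=100_000, tol=1e-10):
    # Iterate W <- W - lr * grad_K(grad_loss(W)) until the residual is tiny.
    W = W0.copy()
    for _ in range(steps):
        if np.linalg.norm(X @ W - Y) < tol:
            break
        W = W - lr * grad_K(grad_loss(W))
    return W


W_inf = dual_preconditioned_gd(W0, grad_K_isotropic)

# Minimum Frobenius-distance interpolator: projection of W0 onto {W : XW = Y}.
W_proj = W0 + np.linalg.pinv(X) @ (Y - X @ W0)

print("residual ||X W_inf - Y||_F :", np.linalg.norm(X @ W_inf - Y))
print("distance ||W_inf - W_proj||_F:", np.linalg.norm(W_inf - W_proj))
```

Under this toy setup both printed quantities should be close to zero, illustrating the two claims for isotropic preconditioners: the iterates converge to an interpolating solution, and that solution is the closest interpolator to the initialization, which coincides with the limit of plain gradient descent started at W_0.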