Saddle-to-Saddle Dynamics in Deep Linear Networks: Small Initialization Training, Symmetry, and Sparsity
Arthur Jacot, François Ged, Berfin Şimşek, Clément Hongler, Franck Gabriel
Abstract
The dynamics of Deep Linear Networks (DLNs) is dramatically affected by the variance $\sigma^2$ of the parameters at initialization $\theta_0$. For DLNs of width $w$, we show a phase transition w.r.t. the scaling $\gamma$ of the variance $\sigma^2 = w^{-\gamma}$ as $w \to \infty$: for large variance ($\gamma < 1$), $\theta_0$ is very close to a global minimum but far from any saddle point, and for small variance ($\gamma > 1$), $\theta_0$ is close to a saddle point and far from any global minimum. While the first case corresponds to the well-studied NTK regime, the second case is less understood. This motivates the study of the case $\gamma \to +\infty$, where we conjecture a Saddle-to-Saddle dynamics: throughout training, gradient descent visits the neighborhoods of a sequence of saddles, each corresponding to linear maps of increasing rank, until reaching a sparse global minimum. We support this conjecture with a theorem for the dynamics between the first two saddles, as well as some numerical experiments.
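To make the conjectured picture concrete, here is a minimal numerical sketch (ours, not the authors' code): a depth-3 linear network with small random initialization, trained by full-batch gradient descent on the matrix-regression loss $\frac{1}{2}\|W_L \cdots W_1 - A^*\|_F^2$, which is equivalent to linear regression with whitened inputs. The rank-3 target $A^*$, the width $w = 50$, the scale $\sigma = 10^{-2}$ (standing in for the large-$\gamma$ regime), and the learning rate are all illustrative choices; with them, the singular values of the end-to-end matrix are picked up one at a time, so the printed effective rank climbs through plateaus $0 \to 1 \to 2 \to 3$.

```python
# Minimal sketch (not from the paper) of the conjectured Saddle-to-Saddle dynamics:
# gradient descent on a depth-3 linear network with small initialization learns a
# rank-3 target one singular direction at a time, so the effective rank of the
# end-to-end matrix increases through plateaus. All hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

d, w, L = 10, 50, 3        # input/output dimension, width, number of weight matrices
sigma = 1e-2               # small initialization scale (large-gamma regime stand-in)
lr, steps = 0.05, 60_000

# Rank-3 target with well-separated singular values, so the plateaus are visible.
U, _ = np.linalg.qr(rng.standard_normal((d, 3)))
V, _ = np.linalg.qr(rng.standard_normal((d, 3)))
A_star = U @ np.diag([3.0, 1.0, 0.3]) @ V.T

# Layers d -> w -> w -> d; the end-to-end map is A = W_L ... W_1.
shapes = [(w, d)] + [(w, w)] * (L - 2) + [(d, w)]
Ws = [sigma * rng.standard_normal(s) for s in shapes]

for t in range(steps + 1):
    # Partial products below / above each layer, for the closed-form gradient.
    below = [np.eye(d)]                      # below[i] = W_{i-1} ... W_1
    for W in Ws[:-1]:
        below.append(W @ below[-1])
    above = [np.eye(d)]                      # above[i] = W_L ... W_{i+1}
    for W in Ws[:0:-1]:
        above.append(above[-1] @ W)
    above = above[::-1]

    A = above[0] @ Ws[0] @ below[0]          # end-to-end matrix
    G = A - A_star                           # gradient of 1/2 ||A - A*||_F^2 in A

    if t % 3000 == 0:
        sv = np.linalg.svd(A, compute_uv=False)[:4]
        print(f"step {t:6d}  loss {0.5 * (G ** 2).sum():9.5f}  "
              f"eff. rank {(sv > 1e-2).sum()}  top sv {np.round(sv, 3)}")

    # dL/dW_i = (W_L ... W_{i+1})^T G (W_{i-1} ... W_1)^T, simultaneous update.
    for i in range(L):
        Ws[i] = Ws[i] - lr * above[i].T @ G @ below[i].T
```

The plateau lengths depend on the seed, $\sigma$, and the target's singular values (smaller singular values take longer to escape, and smaller $\sigma$ lengthens every plateau), so `steps` may need adjusting; the qualitative staircase in the printed effective rank is the point of the sketch.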