
Implicit Regularization via Neural Feature Alignment

2020-08-03 · NeurIPS Workshop DL-IG 2020 · Code Available

Aristide Baratin, Thomas George, César Laurent, R. Devon Hjelm, Guillaume Lajoie, Pascal Vincent, Simon Lacoste-Julien


Abstract

We approach the problem of implicit regularization in deep learning from a geometrical viewpoint. We highlight a regularization effect induced by a dynamical alignment of the neural tangent features introduced by Jacot et al., along a small number of task-relevant directions. This can be interpreted as a combined mechanism of feature selection and compression. By extrapolating a new analysis of Rademacher complexity bounds for linear models, we motivate and study a heuristic complexity measure that captures this phenomenon, in terms of sequences of tangent kernel classes along optimization paths.
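The alignment effect described in the abstract can be made concrete with a small numerical sketch (not the authors' code; all names and the toy model are illustrative). For a network f with parameters w, the tangent features are the per-example gradients ∇_w f(x); the empirical tangent kernel is K(x, x') = ⟨∇_w f(x), ∇_w f(x')⟩. One common way to quantify how well K concentrates on task-relevant directions is centered kernel alignment between K and the target kernel yyᵀ:

```python
import numpy as np

# Hedged sketch: tangent kernel of a tiny one-hidden-layer net and its
# centered alignment with the target kernel y y^T. Values close to 1 mean
# the kernel's dominant directions line up with the labels.

rng = np.random.default_rng(0)
n, d, h = 20, 5, 16                      # samples, input dim, hidden width
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0])                     # toy binary targets

# Model f(x) = v^T tanh(W x); parameters w = (W, v).
W = rng.standard_normal((h, d)) / np.sqrt(d)
v = rng.standard_normal(h) / np.sqrt(h)

def tangent_features(X, W, v):
    """Rows are per-example parameter gradients grad_w f(x)."""
    A = np.tanh(X @ W.T)                 # activations, shape (n, h)
    dA = 1.0 - A**2                      # tanh'
    grad_v = A                           # df/dv, shape (n, h)
    # df/dW_{ij} = v_i * tanh'(z_i) * x_j, flattened to (n, h*d)
    grad_W = (dA * v)[:, :, None] * X[:, None, :]
    return np.concatenate([grad_v, grad_W.reshape(len(X), -1)], axis=1)

J = tangent_features(X, W, v)
K = J @ J.T                              # empirical tangent kernel, (n, n)
Y = np.outer(y, y)                       # target kernel

def centered_alignment(K, Y):
    """Centered kernel alignment <Kc, Yc>_F / (||Kc|| ||Yc||), in [-1, 1]."""
    H = np.eye(len(K)) - np.ones_like(K) / len(K)
    Kc, Yc = H @ K @ H, H @ Y @ H
    return np.sum(Kc * Yc) / (np.linalg.norm(Kc) * np.linalg.norm(Yc))

print(centered_alignment(K, Y))
```

Tracking this alignment along the optimization path (recomputing J at each checkpoint) is one way to observe the feature-selection dynamics the paper studies.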
