On the distance between two neural networks and the stability of learning
2020-02-09 · NeurIPS 2020
Jeremy Bernstein, Arash Vahdat, Yisong Yue, Ming-Yu Liu
Code
- github.com/jxbz/fromage (official, referenced in paper, PyTorch, ★ 128)
- github.com/jxbz/agd (PyTorch, ★ 217)
Abstract
This paper relates parameter distance to gradient breakdown for a broad class of nonlinear compositional functions. The analysis leads to a new distance function called deep relative trust and a descent lemma for neural networks. Since the resulting learning rule seems to require little to no learning rate tuning, it may unlock a simpler workflow for training deeper and more complex neural networks. The Python code used in this paper is here: https://github.com/jxbz/fromage.
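To make the descent lemma concrete, here is a minimal sketch of a Fromage-style layer-wise update, assuming the rule w ← (w − η · (‖w‖/‖g‖) · g) / √(1 + η²), where η is the learning rate, w a layer's weights, and g its gradient. The function name `fromage_step` and the zero-norm fallback are illustrative choices, not taken from the official repository.

```python
import numpy as np

def fromage_step(w, g, lr=0.01):
    """One sketched Fromage-style update for a single layer.

    Scales the gradient so the relative change in the weights is
    controlled by lr, then rescales by 1/sqrt(1 + lr^2) to keep the
    weight norm from drifting upward (an assumption based on the
    update rule described in the paper).
    """
    w_norm = np.linalg.norm(w)
    g_norm = np.linalg.norm(g)
    if w_norm > 0 and g_norm > 0:
        # Relative update: step size is lr * ||w||, independent of ||g||.
        w = w - lr * (w_norm / g_norm) * g
    else:
        # Fallback for zero-norm weights or gradients (illustrative).
        w = w - lr * g
    return w / np.sqrt(1 + lr ** 2)
```

Because the step is measured relative to the weight norm of each layer, the same learning rate behaves comparably across layers of very different scales, which is what makes the rule nearly tuning-free in practice.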