
Coherent Gradients: An Approach to Understanding Generalization in Gradient Descent-based Optimization

2020-02-25 · ICLR 2020

Satrajit Chatterjee


Abstract

An open question in the Deep Learning community is why neural networks trained with Gradient Descent generalize well on real datasets even though they are capable of fitting random data. We propose an approach to answering this question based on a hypothesis about the dynamics of gradient descent that we call Coherent Gradients: Gradients from similar examples are similar and so the overall gradient is stronger in certain directions where these reinforce each other. Thus changes to the network parameters during training are biased towards those that (locally) simultaneously benefit many examples when such similarity exists. We support this hypothesis with heuristic arguments and perturbative experiments and outline how this can explain several common empirical observations about Deep Learning. Furthermore, our analysis is not just descriptive, but prescriptive. It suggests a natural modification to gradient descent that can greatly reduce overfitting.
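The abstract does not spell out the modification it alludes to, but one natural reading of the hypothesis is to suppress the parts of the overall gradient that only a few examples agree on. The sketch below is an illustrative, assumed realization of that idea, not the paper's exact algorithm: per-example gradients for a small logistic-regression problem are winsorized coordinate-wise (the `c` most extreme contributions on each side are clipped) before averaging. The synthetic data, learning rate `lr`, and clipping level `c` are all arbitrary choices for demonstration.

```python
# Illustrative sketch (assumed interpretation, not the paper's exact method):
# a gradient-descent step in which each coordinate of the averaged gradient is
# computed after clipping the most extreme per-example contributions, so that
# updates are driven by directions on which many examples' gradients agree.

import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic binary-classification problem (hypothetical setup).
n, d = 64, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

def per_example_grads(w, X, y):
    """Per-example gradients of the logistic loss, shape (n, d)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))  # predicted probabilities
    return (p - y)[:, None] * X          # one gradient row per example

def winsorized_mean(G, c):
    """Average per-example gradients after clipping, in each coordinate,
    the c most extreme values on either side to the nearest kept value."""
    G_sorted = np.sort(G, axis=0)
    lo = G_sorted[c]        # c-th smallest value per coordinate
    hi = G_sorted[-c - 1]   # c-th largest value per coordinate
    return np.clip(G, lo, hi).mean(axis=0)

# Plain vs. winsorized gradient descent (lr and c are illustrative values).
lr, c = 0.5, 2
w_plain, w_wins = np.zeros(d), np.zeros(d)
for _ in range(200):
    w_plain -= lr * per_example_grads(w_plain, X, y).mean(axis=0)
    w_wins -= lr * winsorized_mean(per_example_grads(w_wins, X, y), c)

def accuracy(w):
    return float(((X @ w > 0).astype(float) == y).mean())

print("train accuracy, plain GD:     ", accuracy(w_plain))
print("train accuracy, winsorized GD:", accuracy(w_wins))
```

On clean data like this the two variants behave similarly; the intuition suggested by the abstract is that on noisy or mislabeled data, trimming outlier per-example gradients would limit the parameter changes that benefit only a handful of examples, and thereby reduce overfitting.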
