
On non-approximability of zero loss global L^2 minimizers by gradient descent in Deep Learning

2023-11-13

Thomas Chen, Patricia Muñoz Ewald


Abstract

We analyze geometric aspects of the gradient descent algorithm in Deep Learning (DL) and give a detailed discussion of the fact that, in underparametrized DL networks, zero loss minimization generically cannot be attained. As a consequence, we conclude that the distribution of training inputs must be non-generic in order to produce zero loss minimizers, both for the method constructed in [Chen-Munoz Ewald 2023, 2024] and for gradient descent [Chen 2025], which assume clustering of the training data.
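The phenomenon described in the abstract can be illustrated with a toy example (not taken from the paper): an underparametrized model with a single scalar weight cannot interpolate generic training data, so gradient descent on the L^2 loss converges to a strictly positive minimum, whereas a non-generic (here: exactly collinear) target admits a zero loss minimizer. All names and the data below are illustrative assumptions.

```python
import numpy as np

def gd_loss(X, Y, steps=5000, lr=0.01):
    # Underparametrized model: one parameter w, predictions w*X, mean squared (L^2) loss.
    w = 0.0
    for _ in range(steps):
        grad = 2.0 * np.mean((w * X - Y) * X)  # gradient of mean((w*X - Y)^2)
        w -= lr * grad
    return float(np.mean((w * X - Y) ** 2))

X = np.array([1.0, 2.0, 3.0])

# Generic targets: not proportional to X, so no w achieves zero loss.
Y_generic = np.array([1.0, 1.5, 3.5])

# Non-generic targets: exactly collinear with X, so w = 1.2 interpolates.
Y_special = 1.2 * X

loss_generic = gd_loss(X, Y_generic)   # converges to a positive minimum
loss_special = gd_loss(X, Y_special)   # converges to (numerically) zero loss
```

In this quadratic case gradient descent converges to the unique global minimizer, so the positive residual loss on the generic data reflects the model's lack of capacity, not a failure of the optimizer; this is the elementary analogue of the underparametrized regime discussed in the abstract.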
