
Large Spikes in Stochastic Gradient Descent: A Large-Deviations View

2026-03-10

Benjamin Gess, Daniel Heydecker


Abstract

We analyse SGD training of a shallow, fully connected network in the NTK scaling and provide a quantitative theory of the catapult phase. We identify an explicit criterion separating two behaviours: when an explicit function G, depending only on the kernel, the learning rate η, and the data, is positive, SGD produces large NTK-flattening spikes with high probability; when G<0, their probability decays like (n/η)^(−γ/2) for an explicitly characterised exponent γ ∈ (0, ∞). This yields a concrete, parameter-dependent explanation for why such spikes may still be observed at practical widths.
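The dichotomy described in the abstract can be restated schematically. The display below is only a sketch of the claimed phase separation; the exact form of the criterion G, the constants, and the exponent are given in the paper, and the symbol γ here stands for the characterised exponent:

```latex
% Schematic statement of the abstract's dichotomy (not the paper's precise theorem).
% G = G(\text{kernel}, \eta, \text{data}) is the explicit criterion; n is the width parameter.
G > 0 \;\Longrightarrow\; \text{large NTK-flattening spikes occur with high probability},
\qquad
G < 0 \;\Longrightarrow\; \mathbb{P}(\text{spike}) \;\lesssim\; (n/\eta)^{-\gamma/2},
\quad \gamma \in (0,\infty).
```

In particular, for G<0 the polynomial (rather than, say, exponential) decay in n/η is what makes spikes plausibly visible at moderate, practical widths.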
