
Why Gradients Rapidly Increase Near the End of Training

2025-06-02

Aaron Defazio


Abstract

During long-duration Large Language Model (LLM) training runs, the gradient norm increases rapidly near the end of training. In this short note, we show that this increase is due to an unintended interaction between weight decay, normalization layers, and the learning rate schedule. We propose a simple correction that fixes this behavior while also resulting in lower loss values throughout training.
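One way to see why such an interaction is plausible (a sketch of the underlying mechanism, not the paper's derivation): a layer followed by normalization is scale-invariant in its weights, so its gradient norm is inversely proportional to the weight norm. Weight decay shrinks the weight norm, which inflates the gradient norm. The toy loss below is a hypothetical illustration, assuming a simple normalized linear layer:

```python
import numpy as np

def loss(w, x, y):
    # Toy "normalized layer": predictions depend only on the
    # direction of w, not its magnitude (scale invariance).
    w_hat = w / np.linalg.norm(w)
    return 0.5 * np.sum((x @ w_hat - y) ** 2)

def grad_norm(w, x, y, eps=1e-6):
    # Central-difference numerical gradient; accurate enough here.
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (loss(w + e, x, y) - loss(w - e, x, y)) / (2 * eps)
    return np.linalg.norm(g)

rng = np.random.default_rng(0)
x, y, w = rng.normal(size=(32, 8)), rng.normal(size=32), rng.normal(size=8)

g_full = grad_norm(w, x, y)
g_half = grad_norm(0.5 * w, x, y)  # as if weight decay halved ||w||
print(g_half / g_full)  # ≈ 2: halving ||w|| doubles the gradient norm
```

Near the end of training, a decaying learning rate schedule slows the growth of the weights while weight decay keeps shrinking them, so the weight norm of normalized layers drops and, by the scale-invariance above, the measured gradient norm rises.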
