Minor First, Major Last: A Depth-Induced Implicit Bias of Sharpness-Aware Minimization

2026-03-09

Chaewon Moon, Dongkuk Si, Chulhee Yun


Abstract

We study the implicit bias of Sharpness-Aware Minimization (SAM) when training $L$-layer linear diagonal networks on linearly separable binary classification. For linear models ($L=1$), both $\ell_\infty$- and $\ell_2$-SAM recover the $\ell_2$ max-margin classifier, matching gradient descent (GD). However, for depth $L=2$, the behavior changes drastically -- even on a single-example dataset. For $\ell_\infty$-SAM, the limit direction depends critically on initialization and can converge to 0 or to any standard basis vector, in stark contrast to GD, whose limit aligns with the basis vector of the dominant data coordinate. For $\ell_2$-SAM, we show that although its limit direction matches the $\ell_1$ max-margin solution as in the case of GD, its finite-time dynamics exhibit a phenomenon we call "sequential feature amplification", in which the predictor initially relies on minor coordinates and gradually shifts to larger ones as training proceeds or initialization increases. Our theoretical analysis attributes this phenomenon to $\ell_2$-SAM's gradient normalization factor applied in its perturbation, which amplifies minor coordinates early and allows major ones to dominate later, giving a concrete example where infinite-time implicit-bias analyses are insufficient. Synthetic and real-data experiments corroborate our findings.
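To make the setup concrete, here is a minimal NumPy sketch (not from the paper; the step size, perturbation radius, data, and initialization below are illustrative assumptions) of the $\ell_2$-SAM update applied to a depth-2 diagonal linear network $f(x) = \langle u \odot v, x\rangle$ with exponential loss on a single separable example. The perturbation is the normalized-gradient ascent step $\epsilon = \rho\, g / \lVert g \rVert_2$, the normalization factor the abstract identifies as the source of sequential feature amplification:

```python
import numpy as np

def sam_step(w, grad_fn, rho=0.05, lr=0.01):
    """One l2-SAM step: ascend rho along the l2-normalized gradient,
    then descend using the gradient evaluated at the perturbed point."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # gradient normalization in the perturbation
    return w - lr * grad_fn(w + eps)

# Depth-2 diagonal linear network on one example (x, y = +1):
# predictor f(x) = <u * v, x>, exponential loss exp(-y * f(x)).
x = np.array([2.0, 0.5])  # a major and a minor data coordinate (illustrative)

def loss_grad(w):
    u, v = w[:2], w[2:]
    margin = np.dot(u * v, x)
    s = -np.exp(-margin)  # d(loss)/d(margin)
    return np.concatenate([s * v * x, s * u * x])  # chain rule through u and v

w = np.full(4, 0.1)  # small identical initialization
for _ in range(2000):
    w = sam_step(w, loss_grad)
```

Tracking the per-coordinate contributions `u * v * x` along this trajectory is one way to visualize the finite-time shift from minor to major coordinates that the paper describes.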
