
Implicit Bias of AdamW: ℓ∞-Norm Constrained Optimization

2024-04-05

Shuo Xie, Zhiyuan Li


Abstract

Adam with decoupled weight decay, also known as AdamW, is widely acclaimed for its superior performance in language modeling tasks, surpassing Adam with ℓ₂ regularization in terms of both generalization and optimization. However, this advantage is not theoretically well understood. One challenge is that, while Adam with ℓ₂ regularization intuitively optimizes the ℓ₂-regularized loss, it is not clear whether AdamW optimizes any specific objective. In this work, we make progress toward understanding the benefit of AdamW by showing that it implicitly performs constrained optimization. More concretely, we show that in the full-batch setting, if AdamW converges with any non-increasing learning rate schedule whose partial sums diverge, it must converge to a KKT point of the original loss under the constraint that the ℓ∞ norm of the parameters is bounded by the inverse of the weight decay factor. This result builds on the observation that Adam can be viewed as a smoothed version of SignGD, which is normalized steepest descent with respect to the ℓ∞ norm, and on a surprising connection between normalized steepest descent with weight decay and the Frank-Wolfe algorithm.
