SOTAVerified

Convergence Rate Analysis of LION

2024-11-12

Yiming Dong, Huan Li, Zhouchen Lin


Abstract

The LION (evoLved sIgn mOmeNtum) optimizer for deep neural network training was discovered by Google via program search; despite its simple sign-based update, it shows impressive performance in training large-scale networks. Although previous studies have investigated its convergence properties, a comprehensive analysis, especially of the convergence rate, is still desirable. Recognizing that LION can be regarded as solving a specific constrained problem, this paper focuses on demonstrating its convergence to a Karush-Kuhn-Tucker (KKT) point at the rate of O(√d K^{-1/4}) measured by the gradient ℓ1 norm, where d is the problem dimension and K is the number of iteration steps. Going a step further, we remove the constraint and establish that LION converges to a critical point of the general unconstrained problem at the same rate. This rate not only delivers the currently optimal dependence on the problem dimension d but also tightly matches the theoretical lower bound for nonconvex stochastic optimization algorithms, which is typically measured using the gradient ℓ2 norm, with respect to the number of iterations K. Through extensive experiments, we not only demonstrate that LION achieves lower loss and higher performance compared to standard SGD, but also empirically confirm that the gradient ℓ1/ℓ2 norm ratio aligns with Θ(√d), showing that our convergence rate matches the theoretical lower bound with respect to d in the empirical sense.
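To make the abstract concrete, the sketch below shows the commonly published form of the LION sign-momentum update, plus a quick numerical check that the ℓ1/ℓ2 gradient-norm ratio grows like √d (the scaling the experiments confirm). The function signature and hyperparameter defaults are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def lion_step(x, m, grad, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.0):
    """One LION update (sketch; hyperparameter names/defaults are assumptions).

    Direction is the sign of an interpolation between momentum and the
    current gradient; momentum is then updated with a second coefficient.
    """
    c = beta1 * m + (1.0 - beta1) * grad          # interpolated direction
    x = x - lr * (np.sign(c) + weight_decay * x)  # sign update (+ decoupled decay)
    m = beta2 * m + (1.0 - beta2) * grad          # momentum update
    return x, m

# Empirical check of the l1/l2 norm ratio: for a Gaussian vector in R^d,
# ||g||_1 / ||g||_2 is on the order of sqrt(d) (the constant is ~sqrt(2/pi)).
rng = np.random.default_rng(0)
for d in (100, 10_000, 1_000_000):
    g = rng.standard_normal(d)
    ratio = np.abs(g).sum() / np.sqrt((g ** 2).sum())
    print(d, ratio / np.sqrt(d))  # roughly constant across d
```

The √d growth of this ratio is what connects the ℓ1-norm rate O(√d K^{-1/4}) to the ℓ2-norm lower bound: dividing the ℓ1 bound by Θ(√d) recovers the K^{-1/4} dependence with no extra dimension factor.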
