
Clipped Gradient Methods for Nonsmooth Convex Optimization under Heavy-Tailed Noise: A Refined Analysis

2026-03-19

Zijian Liu



Abstract

Optimization under heavy-tailed noise has become popular recently, since, as empirical observations suggest, it better describes many modern machine learning tasks. Concretely, instead of a finite second moment on the gradient noise, a bounded p-th moment with p ∈ (1,2] has been recognized as more realistic (say, upper bounded by σ_ℓ^p for some σ_ℓ ≥ 0). A simple yet effective operation, gradient clipping, is known to handle this new challenge successfully. Specifically, Clipped Stochastic Gradient Descent (Clipped SGD) guarantees a high-probability rate O(σ_ℓ log(1/δ) T^{1/p-1}) (resp. O(σ_ℓ^2 log^2(1/δ) T^{2/p-2})) for nonsmooth convex (resp. strongly convex) problems, where δ ∈ (0,1] is the failure probability and T ∈ ℕ is the time horizon. In this work, we provide a refined analysis of Clipped SGD and obtain two rates, O(σ_ℓ d_eff^{-1/(2p)} log^{1-1/p}(1/δ) T^{1/p-1}) and O(σ_ℓ^2 d_eff^{-1/p} log^{2-2/p}(1/δ) T^{2/p-2}), that are faster than the aforementioned best results, where d_eff ≥ 1 is a quantity we call the generalized effective dimension. Our analysis improves upon the existing approach in two respects: a better use of Freedman's inequality and finer bounds on the clipping error under heavy-tailed noise. In addition, we extend the refined analysis to convergence in expectation and obtain new rates that break the known lower bounds. Lastly, to complement the study, we establish new lower bounds for both high-probability and in-expectation convergence. Notably, the in-expectation lower bounds match our new upper bounds, indicating the optimality of our refined analysis for convergence in expectation.
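For readers who want a concrete picture of the method being analyzed, below is a minimal sketch of Clipped SGD on a toy nonsmooth convex problem. The constant step size eta, clipping threshold tau, the ℓ1 objective, and the Pareto noise oracle are illustrative assumptions added for demonstration only; they are not the parameter choices behind the rates stated in the abstract.

```python
import numpy as np

def clipped_sgd(subgrad_oracle, x0, T, eta=0.01, tau=1.0, rng=None):
    """Run T steps of Clipped SGD and return the averaged iterate.

    At each step the stochastic subgradient is rescaled so that its norm is
    at most the clipping threshold tau before the usual descent update. The
    constant step size eta and threshold tau here are placeholders; the
    paper's guarantees rely on horizon-dependent choices of both.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    avg = x.copy()
    for t in range(1, T + 1):
        g = subgrad_oracle(x, rng)       # stochastic subgradient, possibly heavy-tailed
        norm = np.linalg.norm(g)
        if norm > tau:                   # clipping: shrink g onto the ball of radius tau
            g = (tau / norm) * g
        x = x - eta * g
        avg += (x - avg) / (t + 1)       # running average of the iterates
    return avg


# Toy usage: minimize f(x) = ||x||_1 with symmetric Pareto (heavy-tailed) gradient noise,
# which has a finite p-th moment only for p < 1.5.
def noisy_subgrad(x, rng):
    noise = rng.pareto(1.5, size=x.shape) * rng.choice([-1.0, 1.0], size=x.shape)
    return np.sign(x) + noise

x_bar = clipped_sgd(noisy_subgrad, x0=np.ones(10), T=10_000, eta=0.01, tau=5.0)
print(np.linalg.norm(x_bar))
```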
