ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks

2021-02-23 · Code Available

Jungmin Kwon, Jeongseop Kim, Hyunseo Park, In Kwon Choi

Abstract

Recently, learning algorithms motivated by the sharpness of the loss surface as an effective measure of the generalization gap have shown state-of-the-art performance. Nevertheless, sharpness defined over a rigid region with a fixed radius is sensitive to parameter re-scalings that leave the loss unaffected, which weakens the connection between sharpness and the generalization gap. In this paper, we introduce the concept of adaptive sharpness, which is scale-invariant, and propose a corresponding generalization bound. Building on this bound, we suggest a novel learning method, adaptive sharpness-aware minimization (ASAM). Experimental results on various benchmark datasets show that ASAM significantly improves model generalization performance.
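
ASAM follows the two-step structure of sharpness-aware minimization: a first gradient computes a worst-case perturbation of the weights within a neighborhood, and the gradient at the perturbed weights drives the actual update. The adaptive twist is that the neighborhood is re-scaled by a normalization operator (element-wise |w| + eta in the paper), so the resulting sharpness measure is invariant to weight re-scalings that leave the loss unchanged. Below is a minimal sketch of one ASAM update step, assuming PyTorch; the helper name asam_step is illustrative rather than the authors' reference implementation, while the hyperparameter names rho and eta follow the paper.

    import torch

    def asam_step(model, loss_fn, inputs, targets, optimizer, rho=0.5, eta=0.01):
        # --- First pass: gradient at the current weights w ---
        loss = loss_fn(model(inputs), targets)
        loss.backward()

        perturbations = []
        with torch.no_grad():
            # T_w * grad, with the element-wise operator T_w = |w| + eta
            scaled = [(p, (p.abs() + eta) * p.grad)
                      for p in model.parameters() if p.grad is not None]
            # || T_w * grad ||_2 taken over all parameters jointly
            norm = torch.norm(torch.stack([g.norm(p=2) for _, g in scaled]), p=2)
            for p, g in scaled:
                # epsilon = rho * T_w^2 * grad / || T_w * grad ||
                eps = rho * (p.abs() + eta) * g / (norm + 1e-12)
                p.add_(eps)                # move to the adversarial point w + eps
                perturbations.append((p, eps))
        model.zero_grad()

        # --- Second pass: gradient at w + eps drives the actual update ---
        loss_fn(model(inputs), targets).backward()
        with torch.no_grad():
            for p, eps in perturbations:
                p.sub_(eps)                # restore the original weights
        optimizer.step()                   # e.g. an SGD step using this gradient
        optimizer.zero_grad()
        return loss.item()

Because the perturbation radius scales with the weight magnitudes, the paper uses a noticeably larger rho for ASAM than SAM typically does (the default of 0.5 above reflects the CIFAR settings reported there).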

Benchmark Results

Dataset   | Model                 | Metric             | Claimed | Verified | Status
CIFAR-10  | PyramidNet-272 (ASAM) | Percentage correct | 98.68   |          | Unverified
CIFAR-100 | PyramidNet-272 (ASAM) | Percentage correct | 89.9    |          | Unverified
