SGDR: Stochastic Gradient Descent with Warm Restarts
Ilya Loshchilov, Frank Hutter
Code
- github.com/loshchil/SGDR (official) ★ 0
- github.com/rwightman/pytorch-image-models (PyTorch) ★ 36,538
- github.com/Harshvardhan1/cyclic-learning-schedulers-pytorch (PyTorch) ★ 35
- github.com/buptlwz/mabp (PyTorch) ★ 3
- github.com/vinuni-vishc/transformer-gait-analysis (PyTorch) ★ 3
- github.com/pwc-1/Paper-9/tree/main/3/SGDR (MindSpore) ★ 0
- github.com/Gjiangtao/A-Deep-Supervised-Edge-Optimization-Algorithm-for-Salt-Body-Segmentation (PyTorch) ★ 0
- github.com/MrtnMndt/Rethinking_CNN_Layerwise_Feature_Amounts (PyTorch) ★ 0
- github.com/abhuse/cyclic-cosine-decay (PyTorch) ★ 0
- github.com/jolibrain/caffe ★ 0
Abstract
Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at https://github.com/loshchil/SGDR
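The warm restart technique the abstract refers to anneals the learning rate from a maximum to a minimum with a cosine curve over each cycle and then resets it to the maximum, with each cycle typically longer than the last. The sketch below illustrates that schedule in plain Python; the hyperparameter values (eta_max, eta_min, T_0, T_mult) and the per-epoch update are illustrative choices, not the paper's exact settings.

```python
import math

def sgdr_lr(epoch, eta_min=0.0, eta_max=0.1, T_0=10, T_mult=2):
    """Cosine-annealed learning rate with warm restarts (SGDR-style).

    The rate decays from eta_max toward eta_min over a cycle of length T_i,
    then restarts at eta_max; each new cycle is T_mult times longer than
    the previous one. Values here are illustrative, not the paper's settings.
    """
    T_i = T_0
    t_cur = epoch
    # Walk forward through completed cycles to find the position within the current one.
    while t_cur >= T_i:
        t_cur -= T_i
        T_i *= T_mult
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t_cur / T_i))

# Example: print the schedule for the first 30 epochs (restarts at epochs 10 and 30).
for epoch in range(30):
    print(epoch, round(sgdr_lr(epoch), 4))
```

PyTorch ships an equivalent scheduler, torch.optim.lr_scheduler.CosineAnnealingWarmRestarts, which can be used in place of a hand-rolled function like the one above.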