Stochastic Gradient Methods with Layer-wise Adaptive Moments for Training of Deep Networks
2019-05-27
Boris Ginsburg, Patrice Castonguay, Oleksii Hrinchuk, Oleksii Kuchaiev, Vitaly Lavrukhin, Ryan Leary, Jason Li, Huyen Nguyen, Yang Zhang, Jonathan M. Cohen
Code
- github.com/Edresson/VoiceSplit (PyTorch)
- github.com/NVIDIA/OpenSeq2Seq (TensorFlow, referenced in paper)
- github.com/convergence-lab/novograd (PyTorch)
Abstract
We propose NovoGrad, an adaptive stochastic gradient descent method with layer-wise gradient normalization and decoupled weight decay. In our experiments on neural networks for image classification, speech recognition, machine translation, and language modeling, it performs on par with or better than well-tuned SGD with momentum and Adam or AdamW. Additionally, NovoGrad (1) is robust to the choice of learning rate and weight initialization, (2) works well in a large-batch setting, and (3) has half the memory footprint of Adam.
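The abstract names two ingredients: layer-wise gradient normalization and decoupled weight decay. As a rough illustration of how such an update can be assembled, here is a minimal NumPy sketch. The function name `novograd_step`, the hyperparameter values, and the exact placement of the weight-decay term are assumptions made for illustration, not the paper's reference implementation (see the linked repositories for that).

```python
# Minimal sketch of a NovoGrad-style update: a per-layer second moment of the
# squared gradient norm, gradient normalization by that moment, and decoupled
# weight decay folded into the momentum term. Hyperparameters are illustrative.
import numpy as np

def novograd_step(weights, grads, m, v, lr=0.01,
                  beta1=0.95, beta2=0.98, weight_decay=0.001, eps=1e-8):
    """One NovoGrad-style step over lists of per-layer arrays; returns new state."""
    new_weights, new_m, new_v = [], [], []
    for w, g, m_l, v_l in zip(weights, grads, m, v):
        g_norm_sq = float(np.sum(g * g))             # layer-wise squared gradient norm
        v_l = g_norm_sq if v_l is None else beta2 * v_l + (1 - beta2) * g_norm_sq
        g_hat = g / (np.sqrt(v_l) + eps)             # layer-wise gradient normalization
        g_hat = g_hat + weight_decay * w             # decoupled weight decay
        m_l = g_hat if m_l is None else beta1 * m_l + g_hat
        w = w - lr * m_l
        new_weights.append(w); new_m.append(m_l); new_v.append(v_l)
    return new_weights, new_m, new_v

# Toy usage with two "layers"; optimizer state starts empty (None).
weights = [np.random.randn(4, 4), np.random.randn(4)]
grads = [np.ones((4, 4)), np.ones(4)]
m, v = [None, None], [None, None]
weights, m, v = novograd_step(weights, grads, m, v)
```

Note that the second moment `v_l` is a single scalar per layer rather than a per-parameter tensor, which is where the smaller memory footprint relative to Adam comes from.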