ADADELTA: An Adaptive Learning Rate Method
2012-12-22
Matthew D. Zeiler
Code Available
- github.com/arobey1/mbrdl (PyTorch) ★ 16
- github.com/harshraj11584/Paper-Implementation-Overview-Gradient-Descent-Optimization-Sebastian-Ruder ★ 0
- github.com/mohamedameen93/German-Traffic-Sign-Classification-Using-TensorFlow (TensorFlow) ★ 0
- github.com/harshraj11584/Paper-Implementation-Overview-Gradient-Descent-Optimization-Algorithms ★ 0
- github.com/rahulsonone1234/Traffic-Sign-Recognition (TensorFlow) ★ 0
- github.com/nyatadecocoa/Machine_Learning_Midterm_Assignment_2019 (TensorFlow) ★ 0
- github.com/rahulsonone1234/traffic-sign (TensorFlow) ★ 0
- github.com/bratao/PySeqLab ★ 0
- github.com/Arko98/Gradient-Descent-Algorithms ★ 0
- github.com/hx123123/optimization ★ 0
Abstract
We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first-order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities, and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large-scale voice dataset in a distributed cluster environment.
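The per-dimension update the abstract describes can be sketched briefly. ADADELTA keeps two decaying accumulators per parameter, one of squared gradients and one of squared updates, and scales each step by the ratio of their RMS values, so no global learning rate appears. The sketch below is a minimal scalar version following Algorithm 1 of the paper, with the paper's default decay rate ρ = 0.95 and a small constant ε for numerical stability; the quadratic objective and iteration count are illustrative choices, not from the paper.

```python
import math

def adadelta_step(x, grad, state, rho=0.95, eps=1e-6):
    """One ADADELTA update for a scalar parameter (per Algorithm 1)."""
    Eg2, Edx2 = state
    # Accumulate decaying average of squared gradients: E[g^2].
    Eg2 = rho * Eg2 + (1 - rho) * grad ** 2
    # Step = -(RMS of past updates / RMS of gradients) * grad; no learning rate.
    dx = -math.sqrt(Edx2 + eps) / math.sqrt(Eg2 + eps) * grad
    # Accumulate decaying average of squared updates: E[dx^2].
    Edx2 = rho * Edx2 + (1 - rho) * dx ** 2
    return x + dx, (Eg2, Edx2)

# Illustrative use: minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3).
x, state = 0.0, (0.0, 0.0)
for _ in range(5000):
    x, state = adadelta_step(x, 2 * (x - 3), state)
# x should end up close to the minimum at 3.
```

Note how the accumulated E[Δx²] in the numerator keeps the units of the update matched to the units of the parameter, which is the paper's motivation for dropping the manually tuned learning rate.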