
Double Adaptive Stochastic Gradient Optimization

2018-11-06

Kin Gutierrez, Jin Li, Cristian Challu, Artur Dubrawski


Abstract

Adaptive moment methods have been remarkably successful in deep learning optimization, particularly in the presence of noisy and/or sparse gradients. We further the advantages of adaptive moment techniques by proposing a family of double adaptive stochastic gradient methods, DASGrad. They leverage the complementary ideas of the adaptive moment algorithms widely used by the deep learning community and recent advances in adaptive probabilistic algorithms. We analyze the theoretical convergence improvements of our approach in a stochastic convex optimization setting, and provide empirical validation of our findings with convex and non-convex objectives. We observe that the benefits of DASGrad increase with the model complexity and variability of the gradients, and we explore the resulting utility in extensions of distribution-matching multitask learning.
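The abstract's "double adaptive" idea combines two mechanisms: Adam-style adaptive moment estimates for the update direction, and an adaptive sampling distribution over training examples (an importance-sampling idea from adaptive probabilistic algorithms). The sketch below is an illustrative toy, not the paper's exact method: it runs Adam-style updates on a small convex least-squares problem while adapting per-example sampling probabilities toward per-example gradient norms, with an importance weight to keep the gradient estimate unbiased. All names and hyperparameters here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy convex problem: least-squares regression on synthetic data.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def example_grad(w, i):
    # Gradient of the single-example loss 0.5 * (x_i . w - y_i)^2.
    return (X[i] @ w - y[i]) * X[i]

w = np.zeros(d)
m = np.zeros(d)           # first moment estimate (Adam-style)
v = np.zeros(d)           # second moment estimate
p = np.full(n, 1.0 / n)   # adaptive sampling distribution over examples
beta1, beta2, lr, eps = 0.9, 0.999, 0.05, 1e-8

for t in range(1, 2001):
    # Sample an example from the current adaptive distribution.
    i = rng.choice(n, p=p)
    # Importance weight 1 / (n * p_i) keeps the stochastic gradient unbiased.
    g = example_grad(w, i) / (n * p[i])
    # Adam-style adaptive moment update.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    m_hat = m / (1 - beta1**t)
    v_hat = v / (1 - beta2**t)
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    # Adapt sampling probabilities toward per-example gradient norms,
    # mixed with the uniform distribution for stability (bounds the weights).
    norms = np.abs(X @ w - y) * np.linalg.norm(X, axis=1)
    p = 0.5 * norms / norms.sum() + 0.5 / n

loss = 0.5 * np.mean((X @ w - y) ** 2)
```

The uniform mixture in the last step is one common safeguard: it keeps every sampling probability at least `0.5 / n`, so the importance weights stay bounded and the variance of the gradient estimator cannot blow up on rarely sampled examples.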
