Finding Approximate Local Minima Faster than Gradient Descent
2016-11-03
Naman Agarwal, Zeyuan Allen-Zhu, Brian Bullins, Elad Hazan, Tengyu Ma
Abstract
We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time that scales linearly in the underlying dimension and the number of training examples. Its time complexity for finding an approximate local minimum is lower even than that of gradient descent for finding a mere critical point. Our algorithm applies to a general class of optimization problems, including training a neural network and other non-convex objectives arising in machine learning.
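In this line of work, an "approximate local minimum" is typically formalized in the Nesterov-Polyak sense: a point x with small gradient norm whose Hessian has no strongly negative eigenvalue, i.e. ||grad f(x)|| <= eps and lambda_min(hess f(x)) >= -sqrt(rho * eps), where rho is the Lipschitz constant of the Hessian. The sketch below checks this condition on a toy saddle point; it is illustrative only. The function names and constants are our own, and the dense eigendecomposition is for clarity: the paper's linear-in-dimension running time instead relies on Hessian-vector products, which avoid forming the Hessian explicitly.

```python
import numpy as np

def is_approx_local_min(grad, hess, eps, rho):
    """Nesterov-Polyak-style test (illustrative, not the paper's code):
    ||grad|| <= eps  and  lambda_min(hess) >= -sqrt(rho * eps),
    where rho is an assumed Hessian Lipschitz constant."""
    first_order = np.linalg.norm(grad) <= eps
    # Dense eigensolve for clarity; at scale one would estimate the
    # smallest eigenvalue with Hessian-vector products instead.
    second_order = np.linalg.eigvalsh(hess).min() >= -np.sqrt(rho * eps)
    return first_order and second_order

# Toy non-convex example: f(x, y) = x^2 - y^2 has a saddle at the origin.
x = np.zeros(2)
grad = np.array([2 * x[0], -2 * x[1]])  # gradient vanishes at the origin
hess = np.diag([2.0, -2.0])             # eigenvalue -2 violates the bound
print(is_approx_local_min(grad, hess, eps=1e-3, rho=1.0))  # False: a saddle
```

The saddle passes the first-order test (zero gradient) but fails the second-order one, which is exactly the kind of point a gradient-norm criterion alone cannot rule out; escaping such points is what the second-order guarantee buys.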