
Leveraging Non-uniformity in First-order Non-convex Optimization

2021-05-13

Jincheng Mei, Yue Gao, Bo Dai, Csaba Szepesvari, Dale Schuurmans


Abstract

Classical global convergence results for first-order methods rely on uniform smoothness and the Łojasiewicz inequality. Motivated by properties of objective functions that arise in machine learning, we propose a non-uniform refinement of these notions, leading to Non-uniform Smoothness (NS) and the Non-uniform Łojasiewicz inequality (NŁ). The new definitions inspire new geometry-aware first-order methods that are able to converge to global optimality faster than the classical Ω(1/t^2) lower bounds. To illustrate the power of these geometry-aware methods and their corresponding non-uniform analysis, we consider two important problems in machine learning: policy gradient optimization in reinforcement learning (PG), and generalized linear model training in supervised learning (GLM). For PG, we find that normalizing the gradient ascent method can accelerate convergence to O(e^{-t}) while incurring less overhead than existing algorithms. For GLM, we show that geometry-aware normalized gradient descent can also achieve a linear convergence rate, which significantly improves the best known results. We additionally show that the proposed geometry-aware descent methods escape landscape plateaus faster than standard gradient descent. Experimental results are used to illustrate and complement the theoretical findings.
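To make the normalization idea concrete, here is a minimal sketch of plain normalized gradient descent, the basic mechanism the abstract builds on. This is not the paper's geometry-aware algorithm; the objective `f(x) = ||x||^4` and all parameter values are illustrative choices, picked because the gradient vanishes rapidly near the optimum, which flattens vanilla gradient descent but leaves unit-norm steps unaffected.

```python
import numpy as np

def normalized_gd(grad, x0, eta=0.1, steps=200, eps=1e-12):
    """Gradient descent with unit-norm steps: the update direction is the
    gradient divided by its norm, so step length is decoupled from the
    (possibly vanishing) gradient magnitude."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        x = x - eta * g / (np.linalg.norm(g) + eps)  # eps guards g == 0
    return x

# Illustrative plateau objective f(x) = ||x||^4, whose gradient 4||x||^2 x
# shrinks cubically near the origin; normalized steps still make
# constant progress of length eta per iteration.
grad_f = lambda x: 4 * np.linalg.norm(x) ** 2 * x
x_final = normalized_gd(grad_f, x0=[2.0, -1.0])
```

With these settings the iterate reaches an `eta`-sized neighborhood of the global minimum at the origin in a number of steps proportional to the initial distance, whereas unnormalized gradient descent stalls on the plateau.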
