Nonconvex Zeroth-Order Stochastic ADMM Methods with Lower Function Query Complexity

2019-07-30

Feihu Huang, Shangqian Gao, Jian Pei, Heng Huang


Abstract

Zeroth-order (a.k.a. derivative-free) methods are a class of effective optimization methods for solving complex machine learning problems in which gradients of the objective function are unavailable or computationally prohibitive. Although many zeroth-order methods have recently been developed, these approaches still have two main drawbacks: 1) high function query complexity; 2) poor suitability for problems with complex penalties and constraints. To address these drawbacks, in this paper we propose a class of faster zeroth-order stochastic alternating direction method of multipliers (ADMM) methods (ZO-SPIDER-ADMM) to solve nonconvex finite-sum problems with multiple nonsmooth penalties. Moreover, we prove that the ZO-SPIDER-ADMM methods achieve a lower function query complexity of O(nd + dn^{1/2}ε^{-1}) for finding an ε-stationary point, which improves on the existing best nonconvex zeroth-order ADMM methods by a factor of O(d^{1/3}n^{1/6}), where n and d denote the sample size and data dimension, respectively. At the same time, we propose a class of faster zeroth-order online ADMM methods (ZOO-ADMM+) to solve nonconvex online problems with multiple nonsmooth penalties. We also prove that the proposed ZOO-ADMM+ methods achieve a lower function query complexity of O(dε^{-3/2}), which improves the existing best result by a factor of O(ε^{-1/2}). Extensive experimental results on structured adversarial attacks on black-box deep neural networks demonstrate the efficiency of our new algorithms.
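The abstract compresses two ingredients that are easy to state concretely: a zeroth-order gradient estimator, which replaces unavailable gradients with function-value queries, and a SPIDER-style variance-reduced recursion, which reuses the previous estimate so that most steps pay only minibatch query costs. Below is a minimal Python sketch of both under stated assumptions: the coordinate-wise finite-difference estimator, the smoothing parameter mu, and the function names are illustrative choices, not the authors' released implementation, which additionally embeds these estimators inside an ADMM loop handling the nonsmooth penalties.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4):
    """Coordinate-wise zeroth-order gradient estimate of f at x.

    Approximates each partial derivative with a symmetric finite
    difference, costing 2*d function queries per call:
        g_j = (f(x + mu*e_j) - f(x - mu*e_j)) / (2*mu)
    """
    d = x.size
    g = np.zeros(d)
    for j in range(d):
        e = np.zeros(d)
        e[j] = mu  # perturb only coordinate j
        g[j] = (f(x + e) - f(x - e)) / (2.0 * mu)
    return g

def spider_estimate(f_batch, x_t, x_prev, v_prev, mu=1e-4):
    """SPIDER-style variance-reduced estimate used between full refreshes:
        v_t = g(x_t; batch) - g(x_{t-1}; batch) + v_{t-1},
    where g is the zeroth-order estimate on the current minibatch loss."""
    return (zo_gradient(f_batch, x_t, mu)
            - zo_gradient(f_batch, x_prev, mu)
            + v_prev)

# Toy check on f(x) = 0.5 * ||x||^2, whose true gradient is x itself.
if __name__ == "__main__":
    f = lambda x: 0.5 * np.dot(x, x)
    x = np.array([1.0, -2.0, 3.0])
    print(zo_gradient(f, x))  # approximately [1.0, -2.0, 3.0]
```

The recursion in spider_estimate is what drives the improved query complexities: between occasional full-batch refreshes of v, each iteration only queries the minibatch loss at two nearby points, rather than re-estimating the full gradient from scratch.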
