Variance-reduced first-order methods for deterministically constrained stochastic nonconvex optimization with strong convergence guarantees
Zhaosong Lu, Sanyou Mei, Yifeng Xiao
Abstract
In this paper, we study a class of deterministically constrained stochastic optimization problems. Existing methods typically aim to find an $\epsilon$-stochastic stationary point, where the expected violations of both the constraints and first-order stationarity are within a prescribed accuracy $\epsilon$. However, in many practical applications, it is crucial that the constraints be nearly satisfied with certainty, making such an $\epsilon$-stochastic stationary point potentially undesirable due to the risk of significant constraint violations. To address this issue, we propose single-loop variance-reduced stochastic first-order methods, in which the stochastic gradient of the stochastic component is computed using either a truncated recursive momentum scheme or a truncated Polyak momentum scheme for variance reduction, while the gradient of the deterministic component is computed exactly. Under an error bound condition with parameter $\theta \ge 1$ and other suitable assumptions, we establish that these methods respectively achieve a sample and first-order operation complexity of $\widetilde{O}(\epsilon^{-\max\{\theta+2,\,2\theta\}})$ and $\widetilde{O}(\epsilon^{-\max\{4,\,2\theta\}})$ for finding a stronger $\epsilon$-stochastic stationary point, where the constraint violation is within $\epsilon$ with certainty, and the expected violation of first-order stationarity is within $\epsilon$. For $\theta = 1$, these complexities reduce to $\widetilde{O}(\epsilon^{-3})$ and $\widetilde{O}(\epsilon^{-4})$, respectively, which match, up to a logarithmic factor, the best-known complexities achieved by existing methods for finding an $\epsilon$-stochastic stationary point of unconstrained smooth stochastic optimization problems.
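As a point of reference for the variance-reduction schemes mentioned above, the following is a minimal sketch of the standard (untruncated) recursive momentum and Polyak momentum gradient estimators for a stochastic component $f(x) = \mathbb{E}_{\xi}[\tilde f(x;\xi)]$; the symbols $g_t$, $\beta_t$, and $\tilde f$ are illustrative, and the specific truncation applied to these estimators in the paper is not described in the abstract:

\[
g_t = \nabla \tilde f(x_t;\xi_t) + (1-\beta_t)\bigl(g_{t-1} - \nabla \tilde f(x_{t-1};\xi_t)\bigr) \qquad \text{(recursive momentum),}
\]
\[
g_t = (1-\beta_t)\, g_{t-1} + \beta_t \nabla \tilde f(x_t;\xi_t) \qquad \text{(Polyak momentum).}
\]

In either case, the estimator $g_t$ stands in for the exact gradient of the stochastic component in the update, while the gradient of the deterministic component is evaluated exactly.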