
Can Learning Be Explained By Local Optimality In Robust Low-rank Matrix Recovery?

2023-02-21

Jianhao Ma, Salar Fattahi


Abstract

We explore the local landscape of low-rank matrix recovery, focusing on reconstructing a d_1 × d_2 matrix X^* with rank r from m linear measurements, some potentially noisy. When the noise follows an outlier model, minimizing a nonsmooth ℓ_1-loss with a simple sub-gradient method can often perfectly recover the ground truth matrix X^*. Given this, a natural question is what optimization property (if any) enables such learning behavior. The most plausible answer is that the ground truth X^* manifests as a local optimum of the loss function. In this paper, we provide a strong negative answer to this question, showing that, under moderate assumptions, the true solutions corresponding to X^* do not emerge as local optima, but rather as strict saddle points -- critical points with strictly negative curvature in at least one direction. Our findings challenge the conventional belief that all strict saddle points are undesirable and should be avoided.
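As a rough illustration of the setting the abstract describes -- sub-gradient descent on the nonsmooth ℓ_1-loss over a factorized low-rank matrix, with a fraction of measurements corrupted by outliers -- the following NumPy sketch may help. All problem sizes, the factorization X = U V^T, the initialization scale, and the geometrically decaying step size are assumptions for the demo, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (assumptions, not from the paper):
# recover a rank-r ground truth X_star from m linear measurements
# y_i = <A_i, X_star>, a fraction of which carry gross outliers.
d1, d2, r, m = 20, 20, 2, 600
X_star = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
X_star /= np.linalg.norm(X_star)            # normalize for a stable demo
A = rng.standard_normal((m, d1, d2))        # Gaussian sensing matrices
y = np.einsum('mij,ij->m', A, X_star)
outliers = rng.random(m) < 0.2              # 20% corrupted measurements
y[outliers] += 10 * rng.standard_normal(outliers.sum())

# Sub-gradient method on the l1-loss
#   f(U, V) = (1/m) * sum_i |<A_i, U V^T> - y_i|
# over the factorization X = U V^T, starting from a small random
# initialization, with a geometrically decaying step size (a common
# heuristic for sharp nonsmooth problems; the schedule is an assumption).
U = 0.1 * rng.standard_normal((d1, r))
V = 0.1 * rng.standard_normal((d2, r))
step = 0.5
for t in range(600):
    resid = np.einsum('mij,ij->m', A, U @ V.T) - y
    s = np.sign(resid)                       # sub-gradient of |.|
    G = np.einsum('m,mij->ij', s, A) / m     # sub-gradient w.r.t. X
    # Tuple assignment: both updates use the pre-update U and V.
    U, V = U - step * (G @ V), V - step * (G.T @ U)
    step *= 0.99

err = np.linalg.norm(U @ V.T - X_star) / np.linalg.norm(X_star)
print(f"relative recovery error: {err:.3e}")
```

Despite 20% of the measurements being grossly corrupted, the ℓ_1-loss drives the iterates toward X_star, which is the "learning behavior" whose local-landscape explanation the paper interrogates.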
