Convergence of Gradient Descent with Small Initialization for Unregularized Matrix Completion
Jianhao Ma, Salar Fattahi
Abstract
We study the problem of symmetric matrix completion, where the goal is to reconstruct a positive semidefinite matrix X^* ∈ R^{d×d} of rank r, parameterized by UU^⊤, from only a subset of its observed entries. We show that vanilla gradient descent (GD) with small initialization provably converges to the ground truth X^* without requiring any explicit regularization. This convergence result holds true even in the over-parameterized scenario, where the true rank r is unknown and conservatively over-estimated by a search rank r' ≫ r. The existing results for this problem either require explicit regularization, a sufficiently accurate initial point, or exact knowledge of the true rank r. In the over-parameterized regime where r' ≥ r, we show that, with Ω̃(dr^9) observations, GD with an initial point satisfying \|U_0\| ≤ ε converges near-linearly to an ε-neighborhood of X^*. Consequently, smaller initial points result in increasingly accurate solutions. Surprisingly, neither the convergence rate nor the final accuracy depends on the over-parameterized search rank r'; both are governed only by the true rank r. In the exactly-parameterized regime where r' = r, we further strengthen this result by proving that GD converges at a faster rate to achieve an arbitrarily small accuracy ε > 0, provided the initial point satisfies \|U_0\| = O(1/d). At the crux of our method lies a novel weakly-coupled leave-one-out analysis, which allows us to establish the global convergence of GD, extending beyond what was previously possible using the classical leave-one-out analysis.
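The procedure studied above can be sketched in a few lines of NumPy: run plain GD on the unregularized objective f(U) = (1/2)‖P_Ω(UU^⊤ − X^*)‖_F^2 from a small random initialization, with an over-estimated search rank r' > r. The dimensions, sampling rate p, step size, and initialization scale below are illustrative choices for a quick experiment, not the paper's theoretical scalings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (assumptions, not the paper's scalings)
d, r, r_search = 50, 2, 5            # true rank r, over-estimated search rank r' >= r
U_true = rng.standard_normal((d, r))
X_star = U_true @ U_true.T           # ground-truth PSD matrix of rank r

# Symmetric Bernoulli observation mask P_Omega with sampling rate p
p = 0.5
upper = np.triu(rng.random((d, d)) < p)
mask = (upper | upper.T).astype(float)

def grad(U):
    # Gradient of f(U) = (1/2) * ||P_Omega(U U^T - X^*)||_F^2
    return 2.0 * (mask * (U @ U.T - X_star)) @ U

# Small random initialization with ||U_0|| <= eps, over-parameterized width r'
eps = 1e-3
U = rng.standard_normal((d, r_search))
U *= eps / np.linalg.norm(U, 2)      # rescale so the spectral norm equals eps

eta = 0.002                          # step size (illustrative)
for _ in range(1500):                # vanilla GD, no explicit regularization
    U -= eta * grad(U)

# The relative reconstruction error should be small despite r' > r
rel_err = np.linalg.norm(U @ U.T - X_star) / np.linalg.norm(X_star)
print(f"relative Frobenius error: {rel_err:.2e}")
```

Note that no projection, balancing term, or rank-truncation step appears in the loop: the only knobs are the initialization scale eps and the step size, in line with the abstract's claim that smaller initial points yield more accurate solutions.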