
Overcomplete Tensor Decomposition via Koszul-Young Flattenings

2024-11-21

Pravesh K. Kothari, Ankur Moitra, Alexander S. Wein


Abstract

Motivated by connections between algebraic complexity lower bounds and tensor decompositions, we investigate Koszul-Young flattenings, which are the main ingredient in recent lower bounds for matrix multiplication. Based on this tool we give a new algorithm for decomposing an n_1 × n_2 × n_3 tensor as the sum of a minimal number of rank-1 terms, and certifying uniqueness of this decomposition. For n_1 ≤ n_2 ≤ n_3 with n_1 → ∞ and n_3/n_2 = O(1), our algorithm is guaranteed to succeed when the tensor rank is bounded by r ≤ (1−ε)(n_2 + n_3) for an arbitrary ε > 0, provided the tensor components are generically chosen. For any fixed ε, the runtime is polynomial in n_3. When n_2 = n_3 = n, our condition on the rank gives a factor-of-2 improvement over the classical simultaneous diagonalization algorithm, which requires r ≤ n, and also improves on the recent algorithm of Koiran (2024), which requires r ≤ 4n/3. It also improves on the PhD thesis of Persu (2018), which solves rank detection for r ≤ 3n/2. We complement our upper bounds by showing limitations, in particular that no flattening of the style we consider can surpass rank n_2 + n_3. Furthermore, for n × n × n tensors, we show that an even more general class of degree-d polynomial flattenings cannot surpass rank Cn for a constant C = C(d). This suggests that for tensor decompositions, the case of generic components may be fundamentally harder than that of random components, where efficient decomposition is possible even in highly overcomplete settings.
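For context on the baseline the abstract compares against: the classical simultaneous diagonalization (Jennrich) algorithm recovers the rank-1 factors of a generic tensor when r ≤ min(n_1, n_2), by contracting the tensor with two random vectors and diagonalizing the resulting matrix pencil. The sketch below is not from this paper; it is a minimal NumPy illustration of that classical baseline, with function and variable names of our own choosing.

```python
import numpy as np

def jennrich(T, r, seed=0):
    """Decompose a generic rank-r tensor T[i,j,k] = sum_t A[i,t]*B[j,t]*C[k,t]
    by simultaneous diagonalization (the classical baseline; needs r <= min(n1, n2))."""
    rng = np.random.default_rng(seed)
    n1, n2, n3 = T.shape
    # Contract the third mode with two independent random vectors.
    x = rng.standard_normal(n3)
    y = rng.standard_normal(n3)
    Tx = np.einsum('ijk,k->ij', T, x)   # = A diag(x.c_t) B^T
    Ty = np.einsum('ijk,k->ij', T, y)   # = A diag(y.c_t) B^T
    # Tx pinv(Ty) = A diag(lam) A^+ and Tx^T pinv(Ty^T) = B diag(lam) B^+,
    # with shared eigenvalues lam_t = (x.c_t)/(y.c_t), so eigenvectors give A and B.
    def top_eigvecs(M):
        vals, vecs = np.linalg.eig(M)
        top = np.argsort(-np.abs(vals))[:r]          # drop spurious zero modes
        order = top[np.argsort(np.real(vals[top]))]  # consistent pairing by eigenvalue
        return np.real(vecs[:, order])
    A = top_eigvecs(Tx @ np.linalg.pinv(Ty))
    B = top_eigvecs(Tx.T @ np.linalg.pinv(Ty.T))
    # Solve for C by least squares: T reshaped to (n1*n2, n3) equals KR @ C^T,
    # where KR has columns a_t (x) b_t (Khatri-Rao product).
    KR = np.einsum('it,jt->ijt', A, B).reshape(n1 * n2, r)
    C = np.linalg.lstsq(KR, T.reshape(n1 * n2, n3), rcond=None)[0].T
    return A, B, C

# Usage: recover a random rank-5 decomposition of a 6 x 7 x 8 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 5)) for n in (6, 7, 8))
T = np.einsum('it,jt,kt->ijk', A0, B0, C0)
A, B, C = jennrich(T, 5)
err = np.linalg.norm(np.einsum('it,jt,kt->ijk', A, B, C) - T) / np.linalg.norm(T)
```

The recovered factors match the originals only up to permutation and scaling, so correctness is checked through the reconstruction error rather than factor-by-factor. The paper's contribution is precisely to push past this method's r ≤ n barrier (for n_2 = n_3 = n) up to r ≤ (1−ε)·2n using Koszul-Young flattenings.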
