Learning sum of diverse features: computational hardness and efficient gradient-based training for ridge combinations

2024-06-17

Kazusato Oko, Yujin Song, Taiji Suzuki, Denny Wu


Abstract

We study the computational and sample complexity of learning a target function $f_*:\mathbb{R}^d\to\mathbb{R}$ with additive structure, that is, $f_*(x) = \frac{1}{\sqrt{M}}\sum_{m=1}^{M} f_m(\langle x, v_m\rangle)$, where $f_1,f_2,\dots,f_M:\mathbb{R}\to\mathbb{R}$ are nonlinear link functions of single-index models (ridge functions) with diverse and near-orthogonal index features $\{v_m\}_{m=1}^{M}$, and the number of additive tasks $M$ grows with the dimensionality, $M\asymp d^\gamma$ for $\gamma\ge 0$. This problem setting is motivated by the classical additive model literature, the recent representation learning theory of two-layer neural networks, and large-scale pretraining where the model simultaneously acquires a large number of "skills" that are often localized in distinct parts of the trained network. We prove that a large subset of polynomial $f_*$ can be efficiently learned by gradient descent training of a two-layer neural network, with polynomial statistical and computational complexity that depends on the number of tasks $M$ and the information exponent of $f_m$, despite the link functions being unknown and $M$ growing with the dimensionality. We complement this learnability guarantee with a computational hardness result, establishing statistical query (SQ) lower bounds for both correlational SQ and full SQ algorithms.
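
To make the target class concrete, the sketch below instantiates $f_*(x) = \frac{1}{\sqrt{M}}\sum_{m=1}^{M} f_m(\langle x, v_m\rangle)$ with near-orthogonal index features and Gaussian inputs. The dimension, the growth exponent $\gamma$, and the Hermite-polynomial link functions are illustrative assumptions, not the paper's experimental configuration.

```python
import numpy as np

# Minimal sketch of the target class f_*(x) = (1/sqrt(M)) * sum_m f_m(<x, v_m>).
# Dimension d, exponent gamma, and the link functions below are illustrative
# assumptions, not the setup used in the paper.

rng = np.random.default_rng(0)

d = 256                  # input dimension
gamma = 0.4              # number of tasks grows as M ~ d^gamma
M = int(d ** gamma)

# Near-orthogonal index features: unit-normalized Gaussian rows have pairwise
# inner products of order 1/sqrt(d) with high probability.
V = rng.standard_normal((M, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)

# Example polynomial links: He_2(z) = z^2 - 1 (information exponent 2) and
# He_3(z) = z^3 - 3z (information exponent 3).
def he2(z):
    return z**2 - 1

def he3(z):
    return z**3 - 3 * z

links = [he2 if m % 2 == 0 else he3 for m in range(M)]

def f_star(X):
    """Evaluate f_*(x) = (1/sqrt(M)) * sum_m f_m(<x, v_m>) for each row of X."""
    Z = X @ V.T  # (n, M) array of projections <x, v_m>
    return sum(links[m](Z[:, m]) for m in range(M)) / np.sqrt(M)

# Gaussian training data, the standard input distribution in this literature.
n = 4096
X = rng.standard_normal((n, d))
y = f_star(X)
print(X.shape, y.shape, M)  # (4096, 256) (4096,) 9
```

The information exponent of each $f_m$ (the degree of its first nonzero Hermite coefficient) is the quantity the paper's sample and compute bounds scale with; swapping he2/he3 for higher-degree Hermite polynomials yields a target of the same form that is correspondingly harder for gradient-based training.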
