Tight conditions for when the NTK approximation is valid
2023-05-22
Enric Boix-Adsera, Etai Littwin
Abstract
We study when the neural tangent kernel (NTK) approximation is valid for training a model with the square loss. In the lazy training setting of Chizat et al. 2019, we show that rescaling the model by a factor of α = O(T) suffices for the NTK approximation to be valid until training time T. Our bound is tight and improves on the previous bound of Chizat et al. 2019, which required a larger rescaling factor of α = O(T^2).
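As a rough sketch of the setting being referenced (our notation, which may differ from the paper's): in the lazy training regime of Chizat et al. 2019 the model's output is rescaled by a factor α before being fit with the square loss, and the NTK approximation replaces the model by its linearization at initialization.

\[
  h_\alpha(w) \;=\; \alpha\bigl(f(w) - f(w_0)\bigr),
  \qquad
  L(w) \;=\; \tfrac{1}{2}\bigl\|h_\alpha(w) - y\bigr\|^{2},
\]
\[
  f_{\mathrm{lin}}(w) \;=\; f(w_0) \;+\; \nabla f(w_0)^{\top}(w - w_0).
\]

Under this reading, the abstract's claim is that when training h_α by gradient flow on L up to time T, a scale α growing like T is already enough for the trained model to track the dynamics of the linearized model f_lin, whereas the earlier analysis of Chizat et al. 2019 required α to grow like T^2.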