
Improving Energy Natural Gradient Descent through Woodbury, Momentum, and Randomization

2025-05-17

Andrés Guzmán-Cordero, Felix Dangel, Gil Goldshlager, Marius Zeinhofer


Abstract

Natural gradient methods significantly accelerate the training of Physics-Informed Neural Networks (PINNs), but are often prohibitively costly. We introduce a suite of techniques to improve the accuracy and efficiency of energy natural gradient descent (ENGD) for PINNs. First, we leverage the Woodbury formula to dramatically reduce the computational complexity of ENGD. Second, we adapt the Subsampled Projected-Increment Natural Gradient Descent algorithm from the variational Monte Carlo literature to accelerate convergence. Third, we explore the use of randomized algorithms to further reduce the computational cost in the case of large batch sizes. We find that randomization accelerates progress in the early stages of training for low-dimensional problems, and we identify key barriers to attaining acceleration in other scenarios. Our numerical experiments demonstrate that our methods outperform previous approaches, achieving the same L^2 error as the original ENGD up to 75× faster.
