Noisy PDE Training Requires Bigger PINNs
Sebastien Andre-Sloan, Anirbit Mukherjee, Matthew Colbrook
Abstract
Physics-Informed Neural Networks (PINNs) are increasingly used to approximate solutions of partial differential equations (PDEs), particularly in high dimensions. In real-world settings, data are often noisy, so it is crucial to understand when a predictor can still achieve low empirical risk; yet little is known about the conditions under which a PINN can do so effectively. We analyse PINNs applied to the Hamilton–Jacobi–Bellman (HJB) PDE and establish a lower bound on the network size required for the supervised PINN empirical risk to fall below the variance of noisy supervision labels. Specifically, if a predictor achieves empirical risk O(η) below σ^2 (the variance of the supervision data), then necessarily d_N log d_N ≳ N_s η^2, where N_s is the number of samples and d_N the number of trainable parameters. A similar constraint holds in the fully unsupervised PINN setting when boundary labels are noisy. Thus, simply increasing the number of noisy supervision labels does not offer a "free lunch" in reducing empirical risk. We also give empirical studies on the HJB PDE, the Poisson PDE, and the Navier–Stokes PDE set to produce the Taylor–Green vortex solutions. In these experiments we demonstrate that PINNs indeed need to exceed a threshold model size before they can train to errors below σ^2. These results provide a quantitative foundation for understanding parameter requirements when training PINNs in the presence of noisy data.
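To make the scaling concrete, here is a minimal Python sketch, under the hypothetical assumption that the hidden constant in the ≳ bound is 1 (the abstract does not specify it), of the smallest parameter count d_N compatible with d_N log d_N ≳ N_s η^2 for a given sample count N_s and risk gap η.

import math

def min_parameters(num_samples: int, eta: float) -> int:
    """Smallest d_N with d_N * log(d_N) >= num_samples * eta**2 (constant taken as 1)."""
    target = num_samples * eta ** 2
    d = 2  # start at 2, since 1 * log(1) = 0 can never meet a positive target
    while d * math.log(d) < target:
        d += 1
    return d

# Example: N_s = 10,000 noisy labels and a target gap eta = 0.1 below sigma^2.
# The assumed bound requires d_N log d_N >= 100, first satisfied at d_N = 30.
print(min_parameters(10_000, 0.1))  # prints 30

The linear search is chosen for clarity rather than speed; the point of the sketch is only that the required d_N grows roughly like N_s η^2 / log(N_s η^2), so halving the tolerated gap η quarters the right-hand side of the bound.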