
BumpNet: A Sparse MLP Framework for Learning PDE Solutions

2026-03-03

Shao-Ting Chiu, Ioannis G. Kevrekidis, Ulisses Braga-Neto


Abstract

We introduce BumpNet, a sparse multilayer perceptron (MLP) framework for the numerical solution of PDEs and for operator learning. BumpNet is based on basis-function expansion, which makes it superficially similar to radial-basis-function (RBF) networks. However, the basis functions in BumpNet are constructed from ordinary sigmoid activation functions in a sparse multi-layer framework. This makes BumpNet an MLP rather than an RBF network, enabling the efficient use of modern training techniques optimized for MLPs. All parameters of the basis functions, including shape, location, and amplitude, are fully trainable. Model parsimony is encouraged through a basis-function pruning scheme. BumpNet is a general meshless framework that can be combined with existing neural architectures for learning PDE solutions: here, we propose Bump-PINNs (BumpNet with physics-informed neural networks) for solving general PDEs; Bump-EDNNs (BumpNet with evolutionary deep neural networks) for solving time-evolution PDEs; and Bump-DeepONets (BumpNet with deep operator networks) for PDE operator learning. We prove that BumpNets and Bump-DeepONets are universal approximators of continuous functions and continuous operators, respectively. Bump-PINNs are trained using the same collocation-based approach as standard PINNs; Bump-EDNNs use a BumpNet only in the spatial domain and advance the solution in time with an EDNN; and Bump-DeepONets employ a BumpNet regression network as the trunk network of a DeepONet. Extensive numerical experiments demonstrate the efficiency and accuracy of BumpNets.
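To illustrate the core idea of building a localized basis function from ordinary sigmoids, here is a minimal sketch in NumPy. The specific parameterization (a bump formed as the difference of two shifted sigmoids, with trainable `center`, `width`, `slope`, and `amplitude`) is an assumption chosen for illustration, not the paper's exact construction:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bump(x, center=0.0, width=1.0, slope=5.0, amplitude=1.0):
    """Localized basis function built from two ordinary sigmoids.

    The difference of two shifted sigmoids yields a bump whose
    location (center), support (width), steepness (slope), and
    height (amplitude) would all be trainable parameters in a
    BumpNet-style framework.  NOTE: this parameterization is an
    illustrative assumption, not the paper's exact construction.
    """
    return amplitude * (sigmoid(slope * (x - center + width))
                        - sigmoid(slope * (x - center - width)))

# A single bump centered at x = 1 with half-width 0.5:
x = np.linspace(-5.0, 5.0, 201)
y = bump(x, center=1.0, width=0.5, slope=10.0)
```

A solution field can then be represented as a trainable linear combination of such bumps; because every operation is a sigmoid applied to an affine map, the whole construction remains an ordinary (sparse) MLP and trains with standard MLP tooling.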
