Radial Müntz-Szász Networks: Neural Architectures with Learnable Power Bases for Multidimensional Singularities

2026-03-09

Gnankan Landry Regis N'guessan, Bum Jun Kim


Abstract

Radial singular fields, such as 1/r, log r, and crack-tip profiles, are difficult to model with current coordinate-separable neural architectures. We formally prove that any C^2 function that is both radial and additively separable must be quadratic, a fundamental obstruction for coordinate-wise power-law models. Motivated by this result, we introduce Radial Müntz-Szász Networks (RMN), which represent fields as linear combinations of learnable radial powers r^μ, including negative exponents, together with a limit-stable log-primitive for exact log r behavior. RMN admits closed-form spatial gradients and Laplacians, enabling physics-informed learning on punctured domains. Across ten 2D and 3D benchmarks, RMN achieves 1.5 to 51 times lower RMSE than MLPs and 10 to 100 times lower RMSE than SIREN, while using only 27 parameters, compared with 33,537 for MLPs and 8,577 for SIREN. We extend RMN to incorporate angular dependence (RMN-Angular) and to handle multiple sources with learnable centers (RMN-MC), whose source-center recovery errors fall below 10^-4. We also report controlled failures on smooth, strongly non-radial targets to delineate RMN's operating regime.
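As a concrete illustration of the construction the abstract describes, the sketch below implements learnable radial powers with a limit-stable log-primitive φ_μ(r) = (r^μ − 1)/μ, which converges to log r as μ → 0, together with the closed-form Laplacian Δ r^μ = μ(μ + d − 2) r^(μ−2). This is a minimal reconstruction from the abstract alone, not the authors' released code: the class name RMNSketch, the stabilization threshold eps, the exponent initialization, and the choice of PyTorch are all assumptions.

```python
import torch
import torch.nn as nn


class RMNSketch(nn.Module):
    """Illustrative sketch of a radial Müntz-Szász layer (not the authors' code).

    Represents u(x) = sum_k c_k * phi_{mu_k}(r), with r = |x - center|, where
    phi_mu(r) = (r^mu - 1) / mu is a limit-stable log-primitive: as mu -> 0 it
    converges to log r, so exact log-r behavior is representable. Negative
    exponents assume r > 0, i.e., evaluation on a punctured domain.
    """

    def __init__(self, n_terms: int = 8, dim: int = 2, eps: float = 1e-6):
        super().__init__()
        self.mu = nn.Parameter(torch.linspace(-1.0, 2.0, n_terms))  # learnable exponents
        self.c = nn.Parameter(torch.zeros(n_terms))                 # linear coefficients
        self.center = nn.Parameter(torch.zeros(dim))                # learnable source center
        self.eps = eps

    def _phi(self, r: torch.Tensor) -> torch.Tensor:
        # (r^mu - 1)/mu, substituting the mu -> 0 limit log r for numerical stability.
        r = r.unsqueeze(-1)                               # (N, 1)
        small = self.mu.abs() < self.eps                  # exponents near zero
        safe_mu = torch.where(small, torch.ones_like(self.mu), self.mu)
        phi = (r ** safe_mu - 1.0) / safe_mu              # (N, n_terms)
        return torch.where(small, torch.log(r.expand_as(phi)), phi)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        r = torch.linalg.norm(x - self.center, dim=-1)
        return self._phi(r) @ self.c

    def laplacian(self, x: torch.Tensor) -> torch.Tensor:
        # Closed form in d dimensions: Delta r^mu = mu (mu + d - 2) r^(mu - 2),
        # hence Delta phi_mu = (mu + d - 2) r^(mu - 2), with the mu -> 0 limit
        # Delta log r = (d - 2) / r^2. No autograd second derivatives needed.
        d = x.shape[-1]
        r = torch.linalg.norm(x - self.center, dim=-1).unsqueeze(-1)  # (N, 1)
        small = self.mu.abs() < self.eps
        safe_mu = torch.where(small, torch.ones_like(self.mu), self.mu)
        lap = torch.where(
            small,
            (d - 2.0) / r.expand(-1, self.mu.shape[0]) ** 2,
            (safe_mu + d - 2.0) * r ** (safe_mu - 2.0),
        )
        return lap @ self.c
```

For example, RMNSketch(dim=3)(x) evaluates the field at points x of shape (N, 3), and laplacian(x) supplies the analytic Laplacian a physics-informed residual would use, avoiding autograd second derivatives near the singularity.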
