
Orthogonal Transforms in Neural Networks Amount to Effective Regularization

2023-05-10 · Code Available

Krzysztof Zając, Wojciech Sopot, Paweł Wachel


Abstract

We consider applications of neural networks in nonlinear system identification and formulate the hypothesis that adjusting a general network structure by incorporating frequency information, or another known orthogonal transform, should yield an efficient neural network that retains its universal properties. We show that such a structure is a universal approximator and that using any orthogonal transform in the proposed way acts as regularization during training, adjusting the learning rate of each parameter individually. In particular, we show empirically that such a structure, using the Fourier transform, outperforms equivalent models without orthogonality support.
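The core mechanism can be illustrated with a minimal sketch: parametrize a layer's weight as W = QV, where Q is a fixed orthogonal transform and only V is trained. Because Q is orthogonal (QᵀQ = I), gradient descent on V reproduces descent on W but routes every update through the transform, which is how the reparametrization can reshape effective per-parameter step sizes. The example below is an assumption-laden toy (a random orthogonal Q stands in for the Fourier transform, a linear regression stands in for the network); it is not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# Fixed orthogonal transform Q. A random orthogonal matrix from QR is used
# here purely for illustration; the paper's experiments use the Fourier
# transform instead.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

# Effective layer weight is W = Q @ V; only V holds trainable parameters.
V = rng.standard_normal((d, d)) * 0.1

# Toy regression data generated by an unknown linear map (hypothetical task).
X = rng.standard_normal((64, d))
W_true = rng.standard_normal((d, d))
Y = X @ W_true.T

def loss(V):
    E = X @ (Q @ V).T - Y
    return 0.5 * np.sum(E ** 2) / len(X)

lr = 0.05
initial = loss(V)
for _ in range(200):
    W = Q @ V
    E = X @ W.T - Y              # residual, shape (64, d)
    grad_W = E.T @ X / len(X)    # gradient of the loss w.r.t. W
    grad_V = Q.T @ grad_W        # chain rule through the fixed transform
    V -= lr * grad_V
final = loss(V)
```

Since Q Qᵀ = I, the induced update on W is W ← W − lr·QQᵀ∇_W = W − lr·∇_W, so a strictly orthogonal Q leaves the dynamics on W unchanged; the regularization effect the paper analyzes arises from how such transforms are incorporated into the network structure, not from this identity alone.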
