
Extending the Universal Approximation Theorem for a Broad Class of Hypercomplex-Valued Neural Networks

2022-09-06

Wington L. Vital, Guilherme Vieira, Marcos Eduardo Valle


Abstract

The universal approximation theorem asserts that a single hidden layer neural network can approximate continuous functions with any desired precision on compact sets. As an existential result, the universal approximation theorem supports the use of neural networks for various applications, including regression and classification tasks. The universal approximation theorem is not limited to real-valued neural networks but also holds for complex-, quaternion-, tessarine-, and Clifford-valued neural networks. This paper extends the universal approximation theorem to a broad class of hypercomplex-valued neural networks. Precisely, we first introduce the concept of a non-degenerate hypercomplex algebra. Complex numbers, quaternions, and tessarines are examples of non-degenerate hypercomplex algebras. Then, we state the universal approximation theorem for hypercomplex-valued neural networks defined on a non-degenerate algebra.
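To make the setting concrete, the sketch below illustrates how a hypercomplex algebra can be encoded by structure constants (a tensor `C` with `e_i * e_j = sum_k C[i,j,k] e_k`) and how a single hypercomplex neuron could be built on top of that product. This is not code from the paper; the quaternion table, the `hmul`/`neuron` helpers, and the choice of a component-wise ("split") activation are assumptions made here for illustration only.

```python
import numpy as np

def quaternion_structure_constants():
    """Structure constants C[i, j, k] with e_i * e_j = sum_k C[i, j, k] e_k.

    Basis ordering (an illustrative encoding): e0 = 1, e1 = i, e2 = j, e3 = k.
    """
    C = np.zeros((4, 4, 4))
    # (i, j) -> (k, sign) for the quaternion multiplication table.
    table = {
        (0, 0): (0, 1), (0, 1): (1, 1), (0, 2): (2, 1), (0, 3): (3, 1),
        (1, 0): (1, 1), (1, 1): (0, -1), (1, 2): (3, 1), (1, 3): (2, -1),
        (2, 0): (2, 1), (2, 1): (3, -1), (2, 2): (0, -1), (2, 3): (1, 1),
        (3, 0): (3, 1), (3, 1): (2, 1), (3, 2): (1, -1), (3, 3): (0, -1),
    }
    for (i, j), (k, sign) in table.items():
        C[i, j, k] = sign
    return C

def hmul(C, x, y):
    """Bilinear product induced by the structure constants C."""
    return np.einsum('i,j,ijk->k', x, y, C)

def neuron(C, xs, ws, b, act=np.tanh):
    """One hypercomplex neuron: sum of products w_n * x_n plus bias,
    followed by a component-wise ("split") activation (an assumption here)."""
    s = b.copy()
    for x, w in zip(xs, ws):
        s = s + hmul(C, w, x)
    return act(s)
```

Swapping in a different table (e.g. the commutative tessarine product, where `i*j = j*i = k` and `j*j = +1`) changes only `C`, not the neuron code, which is the sense in which a single network definition covers a broad class of algebras.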
