
neuralGAM: Explainable generalized additive neural networks with independent neural network training

2024-07-08 · Statistics and Computing (2024) · Code Available

Ines Ortega-Fernandez, Marta Sestelo, Nora M. Villanueva


Abstract

Neural networks are among the most popular methods today given their high performance on diverse tasks such as computer vision, anomaly detection, computer-aided disease detection and diagnosis, and natural language processing. However, it is usually unclear how neural networks make decisions, and current methods that try to provide interpretability for neural networks are not robust enough. We introduce neuralGAM, a fully explainable neural network framework based on Generalized Additive Models, which trains a separate neural network to estimate and visualize the contribution of each feature to the response variable. In contrast to other Neural Additive Model implementations, in neuralGAM the neural networks are trained independently, leveraging the local scoring and backfitting algorithms to ensure that the Generalized Additive Model converges and is additive. The resulting model is a highly accurate and explainable deep learning model, suitable for high-risk AI practices where decision-making should be based on accountable and interpretable algorithms. neuralGAM is also available as an R package on CRAN: https://cran.r-project.org/web/packages/neuralGAM/index.html
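The backfitting scheme the abstract refers to can be sketched as follows. This is a minimal illustrative sketch, not the neuralGAM implementation: it assumes a Gaussian response with identity link (where local scoring reduces to plain backfitting), and it substitutes a simple polynomial smoother for each per-feature neural network. Each component f_j is refit to the partial residuals of the others and centered for identifiability.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(-2, 2, size=(n, 2))
# Additive ground truth: y = x1^2 + sin(2*x2) + noise
y = X[:, 0] ** 2 + np.sin(2 * X[:, 1]) + rng.normal(0, 0.1, n)

def fit_smoother(x, r, degree=5):
    # Stand-in for a per-feature neural network: a 1-D polynomial smoother.
    return np.poly1d(np.polyfit(x, r, degree))

beta0 = y.mean()                   # intercept: overall mean of the response
f_hat = np.zeros_like(X)           # fitted additive components f_j(x_j)
smoothers = [None] * X.shape[1]

for _ in range(20):                # backfitting iterations
    for j in range(X.shape[1]):
        # Partial residuals: remove the intercept and all other components.
        partial = y - beta0 - f_hat.sum(axis=1) + f_hat[:, j]
        smoothers[j] = fit_smoother(X[:, j], partial)
        fj = smoothers[j](X[:, j])
        f_hat[:, j] = fj - fj.mean()   # centre each f_j for identifiability

mse = np.mean((y - beta0 - f_hat.sum(axis=1)) ** 2)
print(f"training MSE: {mse:.3f}")
```

Because each smoother is fit on its own partial residuals, the components can be visualized one at a time (plot `smoothers[j]` over the range of feature j), which is what makes the additive structure interpretable. For a non-Gaussian response, local scoring wraps this loop around iteratively reweighted working responses.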
