
Binary Stochastic Filtering: feature selection and beyond

2020-07-08 · Code Available

Andrii Trelin, Aleš Procházka


Abstract

Feature selection is one of the most decisive tools for understanding data and machine learning models. Among other methods, sparsity induced by the L^1 penalty is one of the simplest and best-studied approaches to this problem. Although such regularization is frequently used in neural networks to achieve sparsity of weights or unit activations, it is unclear how it can be employed in the feature selection problem. This work aims to extend neural networks with the ability to select features automatically by rethinking how sparsity regularization can be used, namely, by stochastically penalizing feature involvement instead of the layer weights. The proposed method demonstrates superior efficiency compared to several classical methods, achieved with minimal or no computational overhead, and can be applied directly to any existing architecture. Furthermore, the method generalizes easily to neuron pruning and to selecting regions of importance in spectral data.
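The core idea described in the abstract — gating each input feature with a stochastic binary mask whose keep-probabilities are penalized toward zero — can be illustrated with a minimal sketch. This is not the authors' implementation; the class name, the inference-time scaling by the keep-probability, and the penalty weight `lam` are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)


class BinaryStochasticFilter:
    """Hypothetical sketch of a per-feature stochastic gate.

    Each input feature i is kept with probability p[i] during training.
    Penalizing the p vector (L1-style) drives unneeded features' keep
    probabilities toward zero, performing feature selection.
    """

    def __init__(self, n_features, init_p=0.9):
        # Keep-probability per feature; trainable in a real model
        # (e.g. via a straight-through gradient estimator).
        self.p = np.full(n_features, init_p)

    def forward(self, x, training=True):
        if training:
            # Sample one binary mask over features and apply it.
            mask = (rng.random(x.shape[1]) < self.p).astype(x.dtype)
            return x * mask
        # At inference, use the expected value of the mask instead.
        return x * self.p

    def sparsity_penalty(self, lam=1e-3):
        # L1 penalty on keep-probabilities, added to the training loss;
        # this penalizes feature involvement rather than layer weights.
        return lam * np.abs(self.p).sum()


# Toy usage: gate a 4-sample, 5-feature batch.
X = rng.normal(size=(4, 5))
bsf = BinaryStochasticFilter(n_features=5)
gated = bsf.forward(X, training=True)
print(gated.shape)                  # masked batch, same shape as X
print(bsf.sparsity_penalty())       # scalar added to the loss
```

Features whose keep-probability collapses to zero during training are effectively removed, which is what lets the same mechanism serve neuron pruning when placed after a hidden layer instead of the input.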
