
Post-synaptic potential regularization has potential

2019-07-19 · Code Available

Enzo Tartaglione, Daniele Perlo, Marco Grangetto


Abstract

Improving generalization is one of the main challenges for training deep neural networks on classification tasks. A number of techniques have been proposed to boost performance on unseen data: standard data augmentation, ℓ2 regularization, dropout, batch normalization, entropy-driven SGD and many more. In this work we propose an elegant, simple and principled approach: post-synaptic potential regularization (PSP). We tested this regularization on a number of different state-of-the-art scenarios. Empirical results show that PSP achieves a classification error comparable to more sophisticated learning strategies in the MNIST scenario, while improving generalization compared to ℓ2 regularization in deep architectures trained on CIFAR-10.
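The abstract contrasts PSP with ℓ2 regularization. As a rough intuition of the difference, the sketch below penalizes the squared post-synaptic potentials (pre-activation values) of a layer rather than the squared weights. This is a hypothetical, minimal illustration, not the paper's exact formulation; the function names, the single linear layer, and the penalty coefficient `lam` are all assumptions made for the example.

```python
import numpy as np

def psp_penalty(w, b, x, lam=1e-4):
    """Hypothetical sketch of a PSP-style penalty: regularize the
    squared post-synaptic potentials z = x @ w + b (the layer's
    pre-activation values), which depend on both weights and data."""
    z = x @ w + b            # post-synaptic potentials for the batch
    return lam * np.sum(z ** 2)

def l2_penalty(w, lam=1e-4):
    """Standard l2 weight decay for comparison: depends on the
    weights alone, not on the input data."""
    return lam * np.sum(w ** 2)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4))   # batch of 8 inputs, 4 features
w = rng.standard_normal((4, 3))   # weights of a 4 -> 3 linear layer
b = np.zeros(3)

# Either penalty would simply be added to the task loss during training.
print("PSP-style penalty:", psp_penalty(w, b, x))
print("L2 penalty:       ", l2_penalty(w))
```

The key design difference visible here is that the PSP-style term is data-dependent (it sees the batch `x`), whereas ℓ2 decay constrains the weights regardless of the inputs.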
