
SMU: smooth activation function for deep networks using smoothing maximum technique

2021-11-08 · Code Available

Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, Ashish Kumar Pandey


Abstract

Deep learning researchers have a keen interest in proposing new activation functions that can boost network performance, since a good choice of activation function can significantly improve results. Handcrafted activations are the most common choice in neural network models, and ReLU is the most popular in the deep learning community due to its simplicity, though it has some serious drawbacks. In this paper, we propose a novel activation function based on a smooth approximation of known activation functions such as Leaky ReLU, which we call the Smooth Maximum Unit (SMU). Replacing ReLU with SMU yields a 6.22% improvement on the CIFAR100 dataset with the ShuffleNet V2 model.
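The "smoothing maximum" idea in the title can be sketched as follows: Leaky ReLU is max(x, αx), and max(a, b) = ((a + b) + |a − b|) / 2, so a smooth approximation of |z| (here z·erf(μz)) gives a smooth maximum. The sketch below follows that construction; the default values of `alpha` and `mu` are illustrative assumptions, not the paper's trained settings (in the paper μ is a learnable parameter).

```python
import math

def smu(x, alpha=0.25, mu=1e6):
    """Sketch of the Smooth Maximum Unit (SMU).

    Smooths Leaky ReLU max(x, alpha*x) via the identity
        max(a, b) = ((a + b) + |a - b|) / 2
    with |z| approximated by z * erf(mu * z).
    Default alpha and mu are illustrative assumptions.
    """
    return ((1 + alpha) * x
            + (1 - alpha) * x * math.erf(mu * (1 - alpha) * x)) / 2
```

For large μ the approximation is tight: `smu(2.0)` is essentially 2.0 (the ReLU branch) and `smu(-2.0)` is essentially −0.5 (the leaky branch, 0.25 × −2), while small μ yields a smoother transition around zero, which is the point of the technique.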
