Activate or Not: Learning Customized Activation
Ningning Ma, Xiangyu Zhang, Ming Liu, Jian Sun
Abstract
We present a simple, effective, and general activation function, which we term ACON, that learns whether to activate the neurons or not. Interestingly, we find that Swish, the popular NAS-searched activation, can be interpreted as a smooth approximation to ReLU. Intuitively, in the same way, we smoothly approximate the more general Maxout family with our novel ACON family, which remarkably improves performance and makes Swish a special case of ACON. Next, we present meta-ACON, which explicitly learns to optimize the parameter that switches between non-linear (activate) and linear (inactivate) states, and which provides a new design space. By simply changing the activation function, we show its effectiveness on both small models and highly optimized large models (e.g., it improves ImageNet top-1 accuracy by 6.7% on MobileNet-0.25 and by 1.8% on ResNet-152). Moreover, ACON transfers naturally to object detection and semantic segmentation, showing that it is an effective alternative across a variety of tasks. Code is available at https://github.com/nmaac/acon.
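As context for the Swish-to-ReLU claim above: the derivation starts from the smooth maximum of two functions, S_β(η_a(x), η_b(x)) = (η_a(x) − η_b(x)) · σ(β(η_a(x) − η_b(x))) + η_b(x), where σ is the sigmoid. Setting η_a(x) = x and η_b(x) = 0 recovers Swish, x · σ(βx), which tends to ReLU = max(x, 0) as β → ∞. The sketch below is a minimal PyTorch rendering of the ACON-C variant built this way with η_a(x) = p1·x and η_b(x) = p2·x; the formula follows the paper, but the parameter initializations here are illustrative assumptions rather than the reference implementation (see https://github.com/nmaac/acon for the official code).

```python
import torch
import torch.nn as nn

class AconC(nn.Module):
    """ACON-C activation (sketch):
        f(x) = (p1 - p2) * x * sigmoid(beta * (p1 - p2) * x) + p2 * x
    p1, p2, and beta are learnable per-channel parameters; in meta-ACON,
    beta would instead be generated by a small network conditioned on x.
    """
    def __init__(self, channels: int):
        super().__init__()
        # Initialization is an illustrative assumption, not the official choice.
        self.p1 = nn.Parameter(torch.randn(1, channels, 1, 1))
        self.p2 = nn.Parameter(torch.randn(1, channels, 1, 1))
        self.beta = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        dpx = (self.p1 - self.p2) * x  # (p1 - p2) * x, broadcast over N, H, W
        return dpx * torch.sigmoid(self.beta * dpx) + self.p2 * x
```

A module like AconC(64) can drop in wherever a ReLU would act on a 64-channel feature map, e.g. y = AconC(64)(torch.randn(2, 64, 32, 32)); note that when p1 = 1 and p2 = 0 the expression reduces to Swish, consistent with the abstract's claim that Swish is a special case of ACON.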