Go Small and Similar: A Simple Output Decay Brings Better Performance

2021-06-12

Xuan Cheng, Tianshu Xie, Xiaomin Wang, Jiali Deng, Minghui Liu, Ming Liu


Abstract

Regularization and data augmentation methods are widely used and have become increasingly indispensable in deep learning training. Researchers in this area have explored many possibilities, but so far there has been little discussion of regularizing the outputs of the model. This paper begins with the empirical observation that better performance is significantly associated with output distributions that have smaller average values and variances. Audaciously assuming a causal relationship, we propose a novel regularization term, called Output Decay, that forces the model to assign smaller and more similar output values to each class. Though counter-intuitive, this small modification results in a remarkable improvement in performance. Extensive experiments demonstrate the wide applicability, versatility, and compatibility of Output Decay.
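The abstract does not spell out the exact form of the Output Decay term, but its stated goal of smaller and more uniform per-class outputs suggests a penalty on both the magnitude and the spread of the logits. The sketch below is one plausible reading in PyTorch; `output_decay_loss`, `decay_strength`, and the specific squared-magnitude and variance penalties are illustrative assumptions, not the paper's notation.

```python
import torch
import torch.nn.functional as F

def output_decay_loss(logits: torch.Tensor, targets: torch.Tensor,
                      decay_strength: float = 0.1) -> torch.Tensor:
    """Cross-entropy plus an illustrative Output Decay penalty.

    The penalty pushes the model's per-class outputs toward values that
    are both small in magnitude ("go small") and similar to one another
    ("go similar"), mirroring the abstract's description. The exact
    formulation in the paper may differ; `decay_strength` is a
    hypothetical hyperparameter.
    """
    ce = F.cross_entropy(logits, targets)
    # "Go small": penalize the average squared magnitude of the outputs.
    magnitude = logits.pow(2).mean()
    # "Go similar": penalize the spread of outputs across classes.
    spread = logits.var(dim=1).mean()
    return ce + decay_strength * (magnitude + spread)
```

A hypothetical training step would call `output_decay_loss(model(images), labels)` in place of plain cross-entropy; in practice the weight on the decay term would need tuning per task.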
