
DisturbLabel: Regularizing CNN on the Loss Layer

2016-04-30 · CVPR 2016 · Code Available

Lingxi Xie, Jingdong Wang, Zhen Wei, Meng Wang, Qi Tian


Abstract

For a long time, over-fitting in CNN training has been combated with model regularization techniques, including weight decay, model averaging, and data augmentation. In this paper, we present DisturbLabel, an extremely simple algorithm that randomly replaces a fraction of the training labels with incorrect values in each iteration. Although it may seem counter-intuitive to intentionally generate incorrect training labels, we show that DisturbLabel prevents the network from over-fitting by implicitly averaging over exponentially many networks trained with different label sets. To the best of our knowledge, DisturbLabel is the first work to add noise at the loss layer. Meanwhile, DisturbLabel cooperates well with Dropout, providing a complementary regularization effect. Experiments demonstrate competitive recognition results on several popular image recognition datasets.
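The label-disturbing step described in the abstract can be sketched as follows. This is a minimal NumPy illustration, assuming the common formulation in which each label is, with some probability `alpha`, resampled uniformly over all classes (so the original class may be re-drawn); the function name and signature are illustrative, not from the paper's code.

```python
import numpy as np

def disturb_labels(labels, num_classes, alpha, rng=None):
    """Randomly disturb a fraction of training labels.

    With probability `alpha`, each label is replaced by a class drawn
    uniformly from {0, ..., num_classes - 1}; otherwise it is kept.
    """
    rng = np.random.default_rng() if rng is None else rng
    labels = np.asarray(labels)
    # Decide independently for each sample whether to disturb its label.
    disturb_mask = rng.random(labels.shape) < alpha
    # Draw uniform replacement labels (the true class may be re-drawn).
    random_labels = rng.integers(0, num_classes, size=labels.shape)
    return np.where(disturb_mask, random_labels, labels)
```

In a training loop, this would be applied to each mini-batch's labels before computing the loss, so the set of "disturbed" samples changes every iteration. With `alpha = 0` the labels pass through unchanged.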
