Temporal Ensembling for Semi-Supervised Learning

2016-10-07 · Code available

Samuli Laine, Timo Aila

Abstract

In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.
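The abstract describes the core training loop: keep an exponential moving average of the network's past predictions for each sample and use it, after a bias correction, as a training target for the unlabeled data. The following is a minimal per-batch sketch of that idea in PyTorch, not the authors' reference implementation; `model`, `labeled_mask`, and `w_t` are assumed to be supplied by the caller, and in the paper the ensemble update runs once per epoch over stored per-sample predictions rather than per batch.

```python
import torch
import torch.nn.functional as F

def temporal_ensembling_step(model, x, y, labeled_mask, Z, t,
                             alpha=0.6, w_t=1.0):
    """One simplified, per-batch temporal-ensembling step.

    Z     : running ensemble of softmax outputs for these samples
    t     : 1-based epoch index, used for startup bias correction
    alpha : EMA momentum for the ensemble predictions
    w_t   : unsupervised loss weight (the paper ramps it up from zero)
    """
    logits = model(x)                 # stochastic forward pass (dropout,
    z = F.softmax(logits, dim=1)      # input augmentation already applied)

    # Bias-corrected ensemble target; detached so no gradient flows
    # through the target branch.
    z_tilde = (Z / (1.0 - alpha ** t)).detach()

    # Supervised term: cross-entropy on the labeled subset only.
    if labeled_mask.any():
        sup_loss = F.cross_entropy(logits[labeled_mask], y[labeled_mask])
    else:
        sup_loss = logits.new_zeros(())

    # Unsupervised term: pull current predictions toward the ensemble.
    unsup_loss = F.mse_loss(z, z_tilde)
    loss = sup_loss + w_t * unsup_loss

    # Ensemble update (done once per epoch in the paper).
    Z_new = alpha * Z + (1.0 - alpha) * z.detach()
    return loss, Z_new
```

The Π-model listed in the table below replaces the stored ensemble Z with a second stochastic forward pass through the same network within each step, trading the per-sample memory for roughly twice the compute per batch.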

Tasks

Semi-Supervised Image Classification

Benchmark Results

Dataset                 | Model               | Metric           | Claimed | Verified | Status
CIFAR-100, 10000 Labels | Temporal Ensembling | Percentage error | 38.65   |          | Unverified
CIFAR-10, 250 Labels    | Π-Model             | Percentage error | 53.12   |          | Unverified
CIFAR-10, 4000 Labels   | Π-Model             | Percentage error | 12.16   |          | Unverified

Reproductions

None yet. Be the first to reproduce this paper.