Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results

2017-03-06 · NeurIPS 2017 · Code Available

Antti Tarvainen, Harri Valpola

Abstract

The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels from 35.24% to 9.11%.
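The core mechanism the abstract describes is simple: instead of averaging label predictions across epochs (Temporal Ensembling), the teacher model's weights are maintained as an exponential moving average (EMA) of the student's weights, updated at every training step. A minimal sketch in plain Python, with lists of floats standing in for parameter tensors and `alpha` as the EMA decay coefficient (the function name and data layout are illustrative, not from the paper's code):

```python
def ema_update(teacher_params, student_params, alpha=0.99):
    """Mean Teacher weight update: the teacher's parameters are an
    exponential moving average of the student's parameters,
    theta'_t = alpha * theta'_{t-1} + (1 - alpha) * theta_t.

    Unlike Temporal Ensembling's once-per-epoch target update, this
    runs after every training step, so targets stay fresh even on
    large datasets.
    """
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]


# Example: after each optimizer step on the student, refresh the teacher.
teacher = [0.0, 0.0]
student = [1.0, -1.0]   # pretend the student just took a gradient step
teacher = ema_update(teacher, student, alpha=0.9)
```

In training, the student is additionally penalized for predictions that are inconsistent with the teacher's predictions (the consistency cost); the EMA update above is what replaces Temporal Ensembling's prediction averaging.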

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CIFAR-10, 250 labels | Mean Teacher | Percentage error | 47.32 | | Unverified |
| CIFAR-10, 4000 labels | Mean Teacher | Percentage error | 6.28 | | Unverified |
| ImageNet, 10% labeled data | Mean Teacher (ResNeXt-152) | Top 5 accuracy | 90.89 | | Unverified |
| SVHN, 1000 labels | Mean Teacher | Accuracy | 96.05 | | Unverified |
| SVHN, 250 labels | Mean Teacher | Accuracy | 93.55 | | Unverified |

Reproductions