SOTAVerified

In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning

2021-01-15 · ICLR 2021 · Code Available

Mamshad Nayeem Rizve, Kevin Duarte, Yogesh S Rawat, Mubarak Shah


Abstract

Recent research in semi-supervised learning (SSL) is mostly dominated by consistency-regularization-based methods, which achieve strong performance. However, they heavily rely on domain-specific data augmentations, which are not easy to generate for all data modalities. Pseudo-labeling (PL) is a general SSL approach that does not have this constraint but performs relatively poorly in its original formulation. We argue that PL underperforms due to erroneous high-confidence predictions from poorly calibrated models; these predictions generate many incorrect pseudo-labels, leading to noisy training. We propose an uncertainty-aware pseudo-label selection (UPS) framework which improves pseudo-labeling accuracy by drastically reducing the amount of noise encountered in the training process. Furthermore, UPS generalizes the pseudo-labeling process, allowing for the creation of negative pseudo-labels; these negative pseudo-labels can be used for multi-label classification as well as negative learning to improve single-label classification. We achieve strong performance compared to recent SSL methods on the CIFAR-10 and CIFAR-100 datasets. We also demonstrate the versatility of our method on the video dataset UCF-101 and the multi-label dataset Pascal VOC.
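The core idea of the abstract — keep a pseudo-label only when the model is both confident and certain about it, and additionally mine negative pseudo-labels for classes the model confidently rules out — can be sketched as below. This is a minimal illustration, not the authors' implementation: the threshold names (`tau_p`, `tau_n`, `kappa_p`, `kappa_n`) and the use of the standard deviation over stochastic forward passes (e.g. MC dropout) as the uncertainty measure are assumptions for the sketch.

```python
import numpy as np

def select_pseudo_labels(prob_samples, tau_p=0.70, tau_n=0.05,
                         kappa_p=0.05, kappa_n=0.005):
    """Uncertainty-aware selection of positive and negative pseudo-labels.

    prob_samples: (T, N, C) array of softmax outputs from T stochastic
    forward passes (e.g. MC dropout) over N unlabeled samples, C classes.
    Threshold names are illustrative, not the paper's exact notation.
    Returns two boolean (N, C) masks: positive and negative pseudo-labels.
    """
    mean_probs = prob_samples.mean(axis=0)   # (N, C) average confidence
    uncertainty = prob_samples.std(axis=0)   # (N, C) predictive spread

    # Positive pseudo-label: high confidence AND low uncertainty.
    positive = (mean_probs >= tau_p) & (uncertainty <= kappa_p)
    # Negative pseudo-label: class confidently ruled out, low uncertainty.
    negative = (mean_probs <= tau_n) & (uncertainty <= kappa_n)
    return positive, negative
```

Note how the uncertainty gate matters: a sample whose mean confidence clears `tau_p` but whose predictions disagree across stochastic passes is still rejected, which is the mechanism the abstract credits for reducing noisy training. The negative mask can then drive a negative-learning loss (penalizing probability mass on ruled-out classes) for single-label tasks, or supply hard negatives for multi-label tasks.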

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| CIFAR-100, 10000 Labels | UPS (CNN-13) | Percentage error | 32 | — | Unverified |
| CIFAR-100, 4000 Labels | UPS (CNN-13) | Accuracy | 59.23 | — | Unverified |
| CIFAR-10, 1000 Labels | UPS (CNN-13) | Accuracy | 91.82 | — | Unverified |
| CIFAR-10, 4000 Labels | UPS (Shake-Shake) | Percentage error | 4.86 | — | Unverified |

Reproductions