SOTAVerified

Self-Supervised Learning by Estimating Twin Class Distributions

2021-10-14 · Code Available

Feng Wang, Tao Kong, Rufeng Zhang, Huaping Liu, Hang Li


Abstract

We present TWIST, a simple and theoretically explainable self-supervised representation learning method that classifies large-scale unlabeled datasets in an end-to-end way. We employ a siamese network terminated by a softmax operation to produce twin class distributions for two augmented views of an image. Without supervision, we enforce the class distributions of the different augmentations to be consistent. However, simply minimizing the divergence between augmentations causes collapsed solutions, i.e., outputting the same class probability distribution for all images; in that case, no information about the input image is retained. To solve this problem, we propose to maximize the mutual information between the input and the class predictions. Specifically, we minimize the entropy of the distribution for each sample to make each class prediction assertive, and maximize the entropy of the mean distribution to make the predictions of different samples diverse. In this way, TWIST naturally avoids collapsed solutions without specific designs such as an asymmetric network, a stop-gradient operation, or a momentum encoder. As a result, TWIST outperforms state-of-the-art methods on a wide range of tasks. In particular, TWIST performs surprisingly well on semi-supervised learning, achieving 61.2% top-1 accuracy with 1% of ImageNet labels using a ResNet-50 as the backbone, surpassing the previous best result by an absolute 6.2%. Code and pre-trained models are available at: https://github.com/bytedance/TWIST
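The abstract describes three interacting terms: a consistency term between the twin class distributions, a per-sample entropy term that sharpens predictions, and a batch-mean entropy term that keeps predictions diverse. A minimal NumPy sketch of such an objective is below; it is an illustration of the idea as stated in the abstract, not the authors' implementation, and the symmetric-KL choice for the consistency term is an assumption.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class dimension.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def twist_loss(logits1, logits2, eps=1e-8):
    """Sketch of a TWIST-style objective (hypothetical, not the paper's code).

    logits1, logits2: (batch, num_classes) siamese-network outputs for
    two augmentations of the same batch of images.
    """
    p1, p2 = softmax(logits1), softmax(logits2)

    # 1) Consistency: symmetric KL divergence between twin distributions.
    kl = 0.5 * ((p1 * (np.log(p1 + eps) - np.log(p2 + eps))).sum(axis=1)
                + (p2 * (np.log(p2 + eps) - np.log(p1 + eps))).sum(axis=1)).mean()

    # 2) Sharpness: minimize per-sample entropy (assertive predictions).
    ent = lambda p: -(p * np.log(p + eps)).sum(axis=1)
    sharp = 0.5 * (ent(p1).mean() + ent(p2).mean())

    # 3) Diversity: maximize entropy of the batch-mean distribution,
    #    i.e. subtract it so that collapsed (identical) predictions cost more.
    mean_p = 0.5 * (p1.mean(axis=0) + p2.mean(axis=0))
    diverse = -(mean_p * np.log(mean_p + eps)).sum()

    return kl + sharp - diverse
```

With near-one-hot logits, a "collapsed" batch (every sample predicting the same class) incurs a higher loss than a diverse batch, since the mean-entropy term rewards spreading predictions across classes.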

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Caltech-101 | TWIST (ResNet-50) | Top-1 Error Rate | 6.5 | — | Unverified |
| SUN397 | TWIST (ResNet-50) | Accuracy | 67.4 | — | Unverified |

Reproductions