Prototypical Contrastive Learning of Unsupervised Representations

2020-05-11 · ICLR 2021 · Code Available

Junnan Li, Pan Zhou, Caiming Xiong, Steven C. H. Hoi

Abstract

This paper presents Prototypical Contrastive Learning (PCL), an unsupervised representation learning method that addresses the fundamental limitations of instance-wise contrastive learning. PCL not only learns low-level features for the task of instance discrimination; more importantly, it implicitly encodes the semantic structure of the data into the learned embedding space. Specifically, we introduce prototypes as latent variables to help find a maximum-likelihood estimate of the network parameters in an Expectation-Maximization framework. We iteratively perform the E-step, which finds the distribution over prototypes via clustering, and the M-step, which optimizes the network via contrastive learning. We propose the ProtoNCE loss, a generalized version of the InfoNCE loss that encourages representations to move closer to their assigned prototypes. PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks, with substantial improvements in low-resource transfer learning. Code and pretrained models are available at https://github.com/salesforce/PCL.
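To make the ProtoNCE objective concrete, below is a minimal PyTorch sketch of the loss for one batch, assuming a MoCo-style setup in which an E-step has already produced L2-normalized prototypes, per-sample cluster assignments, and per-prototype concentration estimates. The helper name `proto_nce_loss` and its arguments (`queue`, `phi`, `tau`) are illustrative assumptions, not the repository's API, and the paper additionally averages the prototype term over several clusterings with different numbers of clusters, which this sketch omits.

```python
import torch
import torch.nn.functional as F


def proto_nce_loss(v, v_pos, queue, prototypes, assignments, phi, tau=0.07):
    """ProtoNCE-style loss for one batch (illustrative sketch, not the
    authors' exact implementation).

    v           (N, D) L2-normalized query embeddings
    v_pos       (N, D) L2-normalized positive key embeddings
    queue       (Q, D) L2-normalized negative embeddings (e.g. a MoCo queue)
    prototypes  (K, D) L2-normalized cluster centroids from the E-step
    assignments (N,)   index of each sample's assigned prototype
    phi         (K,)   per-prototype concentration (temperature) estimates
    tau         instance-level temperature
    """
    n = v.size(0)

    # Instance-wise InfoNCE term: each query's positive key scores
    # against the negatives in the queue.
    l_pos = (v * v_pos).sum(dim=1, keepdim=True)   # (N, 1)
    l_neg = v @ queue.t()                          # (N, Q)
    inst_logits = torch.cat([l_pos, l_neg], dim=1) / tau
    inst_labels = torch.zeros(n, dtype=torch.long, device=v.device)
    loss_instance = F.cross_entropy(inst_logits, inst_labels)

    # Prototype term: the assigned prototype is the positive class, and
    # each prototype gets its own temperature phi.
    proto_logits = (v @ prototypes.t()) / phi      # (N, K)
    loss_proto = F.cross_entropy(proto_logits, assignments)

    return loss_instance + loss_proto
```

Dividing each prototype's logits by its own concentration `phi` gives tighter clusters a lower effective temperature, so samples are pulled more strongly toward dense prototypes. In the full method this loss sits inside an EM loop: periodically run clustering on momentum-encoder features to refresh `prototypes`, `assignments`, and `phi` (E-step), then train the network with this loss (M-step).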

Tasks

Benchmark Results

Dataset                    | Model           | Metric             | Claimed | Verified | Status
ImageNet (1% labeled data) | PCL (ResNet-50) | Top-5 Accuracy (%) | 75.6    |          | Unverified

Reproductions

No reproductions have been submitted yet.