SOTAVerified

A Framework For Contrastive Self-Supervised Learning And Designing A New Approach

2020-08-31 · Code Available

William Falcon, Kyunghyun Cho


Abstract

Contrastive self-supervised learning (CSL) is an approach to learning useful representations by solving a pretext task that selects and compares anchor, negative, and positive (APN) features from an unlabeled dataset. We present a conceptual framework that characterizes CSL approaches along five aspects: (1) data augmentation pipeline, (2) encoder selection, (3) representation extraction, (4) similarity measure, and (5) loss function. We analyze three leading CSL approaches, AMDIM, CPC, and SimCLR, and show that despite different motivations, they are special cases under this framework. We demonstrate the utility of the framework by designing Yet Another DIM (YADIM), which achieves competitive results on CIFAR-10, STL-10, and ImageNet, and is more robust to the choice of encoder and representation extraction strategy. To support ongoing CSL research, we release a PyTorch implementation of this conceptual framework along with standardized implementations of AMDIM, CPC (V2), SimCLR, BYOL, MoCo (V2), and YADIM.
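The anchor-positive-negative (APN) comparison described in the abstract can be sketched with an InfoNCE-style contrastive loss, the family of objectives used by CPC and SimCLR. This is an illustrative sketch, not the paper's implementation: the vectors and the temperature value below are assumptions chosen for the example.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def info_nce(anchor, positive, negatives, temperature=0.5):
    """InfoNCE-style loss: softmax cross-entropy over similarities,
    with the positive pair as the correct class. Lower is better."""
    # Logit 0 is the anchor-positive similarity; the rest are negatives.
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    # Numerically stable softmax cross-entropy with the positive at index 0.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    return -math.log(exps[0] / sum(exps))

# Toy features (illustrative only): the positive is a slightly
# perturbed copy of the anchor, as an augmentation would produce.
anchor = [1.0, 0.0, 0.5]
positive = [0.9, 0.1, 0.6]
negatives = [[-1.0, 0.5, 0.0], [0.0, -1.0, 1.0]]

loss = info_nce(anchor, positive, negatives)
```

Because the positive is far more similar to the anchor than either negative, the loss comes out well below the uniform-guess baseline of log(3); driving this loss down is what shapes the learned representations.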

Tasks

Benchmark Results

Dataset | Model               | Metric             | Claimed | Verified | Status
STL-10  | AMDIM               | Percentage correct | 93.8    |          | Unverified
STL-10  | AMDIM               | Percentage correct | 94.5    |          | Unverified
STL-10  | YADIM               | Percentage correct | 92.15   |          | Unverified
STL-10  | CPC†                | Percentage correct | 78.36   |          | Unverified
STL-10  | Simulated Fixations | Percentage correct | 61      |          | Unverified

Reproductions