
Consistent Assignment for Representation Learning

2021-02-26 · ICLR Workshop EBM 2021

Thalles Santos Silva, Adín Ramírez Rivera


Abstract

We introduce Consistent Assignment for Representation Learning (CARL), an unsupervised method for learning visual representations that combines contrastive learning with deep clustering. By viewing contrastive learning from a clustering perspective, CARL learns unsupervised representations through a set of general prototypes that serve as energy anchors, enforcing that different views of a given image are assigned to the same prototype. Unlike contemporary work combining contrastive learning with deep clustering, CARL learns the set of general prototypes in an online fashion using gradient descent, without the need for offline clustering or non-differentiable algorithms to solve the cluster assignment problem. CARL achieves results comparable to current state-of-the-art methods on the CIFAR-10, CIFAR-100, and STL-10 datasets.
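The core idea described in the abstract (two augmented views of the same image should produce consistent soft assignments over a learnable set of prototypes) can be sketched as follows. This is a hypothetical NumPy illustration of such a consistency objective, not the paper's actual implementation; the function names, the symmetric cross-entropy form, and the temperature value are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(z1, z2, prototypes, temperature=0.1):
    """Hypothetical sketch of a consistent-assignment objective:
    both views of an image should yield similar distributions
    over the learnable prototypes (the 'energy anchors')."""
    # L2-normalize embeddings and prototypes so dot products
    # are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    c = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    # Soft assignment of each view to the prototypes.
    p1 = softmax(z1 @ c.T / temperature)
    p2 = softmax(z2 @ c.T / temperature)
    # Symmetric cross-entropy: each view's assignment should
    # predict the other view's assignment.
    eps = 1e-12
    ce12 = -(p1 * np.log(p2 + eps)).sum(axis=1).mean()
    ce21 = -(p2 * np.log(p1 + eps)).sum(axis=1).mean()
    return 0.5 * (ce12 + ce21)

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))          # embeddings of view 1
protos = rng.normal(size=(16, 32))    # 16 hypothetical prototypes
same = consistency_loss(z, z, protos)
noisy = consistency_loss(z, z + 0.5 * rng.normal(size=z.shape), protos)
```

In a full pipeline both the encoder producing the embeddings and the prototype matrix would be updated end-to-end by gradient descent, which is what allows the prototypes to be learned online without an offline clustering step.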
