SOTAVerified

Self-Supervised Image Classification

This is the task of image classification using representations learnt with self-supervised learning. Self-supervised methods generally involve a pretext task that is solved to learn a good representation, together with a loss function used to learn it. One example of a loss function is an autoencoder-based loss, where the goal is to reconstruct an image pixel by pixel. A more popular recent example is a contrastive loss, which measures the similarity of sample pairs in a representation space and can use a varying target rather than the fixed reconstruction target of an autoencoder.
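To make the contrastive idea concrete, below is a minimal numpy sketch of an NT-Xent-style contrastive loss (the form used by SimCLR-like methods): each image appears as two augmented views, and the loss pulls a view toward its partner while pushing it away from all other samples in the batch. This is an illustrative sketch, not the implementation from any particular paper; the pairing convention (rows 2k and 2k+1 are positives) and the temperature value are assumptions.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """Normalized temperature-scaled cross-entropy (NT-Xent) loss.

    z: array of shape (2N, d); rows 2k and 2k+1 are the two augmented
    views of the same image (the positive pair). All other rows act as
    negatives for each anchor.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalise embeddings
    sim = (z @ z.T) / temperature                      # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    n = z.shape[0]
    pos = np.arange(n) ^ 1                             # partner index: 0<->1, 2<->3, ...
    # cross-entropy: -log softmax probability of the positive pair
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), pos].mean()
```

When the two views of each image map to identical embeddings, the positive similarity dominates and the loss is small; embeddings whose positives are no closer than their negatives incur a larger loss.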

A common evaluation protocol is to train a linear classifier on top of (frozen) representations learnt by self-supervised methods. The leaderboards for the linear evaluation protocol can be found below. In practice, it is more common to fine-tune features on a downstream task. An alternative evaluation protocol therefore uses semi-supervised learning and fine-tunes on a fraction of the labels. The leaderboards for the fine-tuning protocol can be accessed here.
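The linear evaluation protocol can be sketched as follows: treat the encoder's outputs as fixed feature vectors and fit only a linear classifier on top of them. The numpy code below stands in for this with synthetic "frozen" features (the encoder, data, and hyperparameters here are illustrative assumptions, not any benchmark's actual setup) and trains a logistic-regression probe by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for frozen encoder outputs: two class-conditional
# blobs in a 16-dimensional representation space.
n, d = 200, 16
labels = rng.integers(0, 2, size=n)
centers = rng.normal(size=(2, d))
features = centers[labels] + 0.1 * rng.normal(size=(n, d))  # "frozen" features

# Linear probe: logistic regression trained with gradient descent.
# The encoder is never updated; only w and b are learned.
w, b = np.zeros(d), 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(features @ w + b)))   # sigmoid predictions
    w -= features.T @ (p - labels) / n              # gradient step on weights
    b -= (p - labels).mean()                        # gradient step on bias

acc = ((features @ w + b > 0).astype(int) == labels).mean()
```

If the self-supervised representation separates the classes well, a probe this simple reaches high accuracy; that is exactly what the linear-evaluation leaderboards measure on real features.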

You may want to read some blog posts before reading the papers and checking the leaderboards.

There is also Yann LeCun's talk at AAAI-20 which you can watch here (35:00+).

( Image credit: A Simple Framework for Contrastive Learning of Visual Representations )

Papers

Showing 76–100 of 110 papers

Title | Status | Hype
Boosting Contrastive Self-Supervised Learning with False Negative Cancellation | Code | 1
Exploring Simple Siamese Representation Learning | Code | 1
A comparative study of semi- and self-supervised semantic segmentation of biomedical microscopy data | | 0
CompRess: Self-Supervised Learning by Compressing Representations | Code | 1
Representation Learning via Invariant Causal Mechanisms | Code | 1
Consensus Clustering With Unsupervised Representation Learning | | 0
Generative Pretraining from Pixels | Code | 2
Big Self-Supervised Models are Strong Semi-Supervised Learners | Code | 2
Unsupervised Learning of Visual Features by Contrasting Cluster Assignments | Code | 2
Bootstrap your own latent: A new approach to self-supervised Learning | Code | 1
What Makes for Good Views for Contrastive Learning? | Code | 0
Prototypical Contrastive Learning of Unsupervised Representations | Code | 1
Improved Baselines with Momentum Contrastive Learning | Code | 1
A Simple Framework for Contrastive Learning of Visual Representations | Code | 2
Self-Supervised Learning of Pretext-Invariant Representations | Code | 1
Self-labelling via simultaneous clustering and representation learning | Code | 1
Momentum Contrast for Unsupervised Visual Representation Learning | Code | 3
On Mutual Information Maximization for Representation Learning | Code | 0
Large Scale Adversarial Representation Learning | Code | 1
Contrastive Multiview Coding | Code | 1
Learning Representations by Maximizing Mutual Information Across Views | Code | 0
Putting An End to End-to-End: Gradient-Isolated Learning of Representations | Code | 0
Data-Efficient Image Recognition with Contrastive Predictive Coding | Code | 0
Unsupervised Pre-Training of Image Features on Non-Curated Data | Code | 0
Local Aggregation for Unsupervised Learning of Visual Embeddings | Code | 0
Page 4 of 5

No leaderboard results yet.