
Self-Supervised Image Classification

This is the task of image classification using representations learnt with self-supervised learning. Self-supervised methods generally involve a pretext task that is solved to learn a good representation, together with a loss function to learn with. One example of a loss function is an autoencoder-based loss, where the goal is to reconstruct an image pixel by pixel. A more popular recent example is a contrastive loss, which measures the similarity of sample pairs in a representation space, and where the target can vary from sample to sample instead of being a fixed reconstruction target (as in the case of autoencoders).
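
As a rough illustration of the contrastive setup, the sketch below implements an NT-Xent-style loss over two augmented views of the same batch, similar in spirit to the objective used in SimCLR. It assumes PyTorch; the function name, temperature value, and batch layout are illustrative assumptions rather than a reference implementation.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive (NT-Xent-style) loss between two views of the same batch.

    z1, z2: [N, D] embeddings of two augmentations of the same N images.
    Each sample's positive is the other view of the same image; every other
    embedding in the 2N-sized batch acts as a negative.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # [2N, D], unit length
    sim = z @ z.t() / temperature                        # cosine similarities as logits
    # A sample is never its own positive, so mask out the diagonal.
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))
    # Row i's positive sits at i + N (first half) or i - N (second half).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

Here z1 and z2 would come from encoding two random augmentations of the same images, so the "target" changes with the sampled views rather than being a fixed pixel-wise reconstruction.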

A common evaluation protocol is to train a linear classifier on top of (frozen) representations learnt by self-supervised methods. The leaderboards for the linear evaluation protocol can be found below. In practice, it is more common to fine-tune features on a downstream task. An alternative evaluation protocol therefore uses semi-supervised learning and fine-tunes on a small percentage of the labels (e.g. 1% or 10%). The leaderboards for the fine-tuning protocol can be accessed here.
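
For concreteness, here is a minimal sketch of the linear evaluation protocol in PyTorch: the backbone is frozen and only a linear head is trained on labelled data. The names `encoder` and `loader`, and the optimiser settings, are placeholder assumptions; the exact protocol (augmentations, schedules, which feature layer is probed) varies across papers.

```python
import torch
import torch.nn as nn

def linear_probe(encoder, loader, feature_dim, num_classes, epochs=10, lr=1e-2):
    """Train a linear classifier on top of frozen self-supervised features.

    encoder: pretrained backbone mapping images to [N, feature_dim] features.
    loader:  iterable yielding (images, labels) batches.
    """
    encoder.eval()                                   # freeze the backbone
    for p in encoder.parameters():
        p.requires_grad = False

    head = nn.Linear(feature_dim, num_classes)       # the only trainable part
    opt = torch.optim.SGD(head.parameters(), lr=lr, momentum=0.9)
    ce = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():
                feats = encoder(images)              # frozen representations
            loss = ce(head(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```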

You may want to read some blog posts before reading the papers and checking the leaderboards.

There is also Yann LeCun's talk at AAAI-20, which you can watch here (from 35:00 onwards).

(Image credit: A Simple Framework for Contrastive Learning of Visual Representations)

Papers

Showing 1-25 of 110 papers

Title | Status | Hype
Vision Transformers Need Registers | Code | 6
DINOv2: Learning Robust Visual Features without Supervision | Code | 6
Multi-label Cluster Discrimination for Visual Representation Learning | Code | 4
Architecture-Agnostic Masked Image Modeling -- From ViT back to CNN | Code | 4
ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities | Code | 3
Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling | Code | 3
XCiT: Cross-Covariance Image Transformers | Code | 3
Momentum Contrast for Unsupervised Visual Representation Learning | Code | 3
Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective | Code | 2
Unicom: Universal and Compact Representation Learning for Image Retrieval | Code | 2
Masked Siamese Networks for Label-Efficient Learning | Code | 2
Context Autoencoder for Self-Supervised Representation Learning | Code | 2
BEiT: BERT Pre-Training of Image Transformers | Code | 2
Generative Pretraining from Pixels | Code | 2
Big Self-Supervised Models are Strong Semi-Supervised Learners | Code | 2
Unsupervised Learning of Visual Features by Contrasting Cluster Assignments | Code | 2
A Simple Framework for Contrastive Learning of Visual Representations | Code | 2
MIM-Refiner: A Contrastive Learning Boost from Intermediate Pre-Trained Representations | Code | 1
Masking meets Supervision: A Strong Learning Alliance | Code | 1
Contrastive Tuning: A Little Help to Make Masked Autoencoders Forget | Code | 1
VNE: An Effective Method for Improving Deep Representation by Manipulating Eigenvalue Distribution | Code | 1
Learning by Sorting: Self-supervised Learning with Group Ordering Constraints | Code | 1
Towards Sustainable Self-supervised Learning | Code | 1
Bootstrapped Masked Autoencoders for Vision BERT Pretraining | Code | 1
Multiplexed Immunofluorescence Brain Image Analysis Using Self-Supervised Dual-Loss Adaptive Masked Autoencoder | Code | 1
Page 1 of 5

No leaderboard results yet.