
Self-Supervised Image Classification

This is the task of image classification using representations learnt with self-supervised learning. Self-supervised methods generally involve a pretext task that is solved to learn a good representation, together with a loss function to learn with. One example of a loss function is an autoencoder-based loss, where the goal is to reconstruct an image pixel by pixel. A more popular recent example is a contrastive loss, which measures the similarity of sample pairs in a representation space; unlike the fixed reconstruction target of an autoencoder, the target here can vary from pair to pair.
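The contrastive idea described above can be made concrete with a minimal NumPy sketch of an NT-Xent-style loss (the form popularized by SimCLR, which this page credits for its image). The function name and hyperparameters here are illustrative, not taken from any particular codebase:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Illustrative NT-Xent (normalized-temperature cross-entropy) contrastive loss.
    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Each sample's positive is its other view; all remaining samples are negatives."""
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize embeddings
    sim = z @ z.T / temperature                        # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    n = z1.shape[0]
    # index of each sample's positive pair (its other augmented view)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy over similarities: -log softmax(sim)[i, pos[i]]
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

Intuitively, the loss is low when the two views of the same image land close together in the representation space and far from every other sample in the batch.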

A common evaluation protocol is to train a linear classifier on top of (frozen) representations learnt by self-supervised methods. The leaderboards for the linear evaluation protocol can be found below. In practice, it is more common to fine-tune features on a downstream task. An alternative evaluation protocol therefore uses semi-supervised learning and fine-tunes on a fraction of the labels. The leaderboards for the fine-tuning protocol can be accessed here.
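The linear evaluation protocol can be sketched in a few lines of NumPy: features are extracted once from the frozen encoder, and only a softmax classifier on top of them is trained. The function names and hyperparameters below are illustrative assumptions, not part of any standard benchmark code:

```python
import numpy as np

def linear_probe(features, labels, num_classes, lr=0.1, epochs=200):
    """Illustrative linear evaluation: fit a softmax classifier on
    pre-extracted (frozen) features. The encoder itself is never updated."""
    n, d = features.shape
    W = np.zeros((d, num_classes))
    b = np.zeros(num_classes)
    onehot = np.eye(num_classes)[labels]
    for _ in range(epochs):
        logits = features @ W + b
        logits -= logits.max(axis=1, keepdims=True)    # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / n                    # softmax cross-entropy gradient
        W -= lr * features.T @ grad                    # update only the linear head
        b -= lr * grad.sum(axis=0)
    return W, b

def probe_accuracy(features, labels, W, b):
    """Top-1 accuracy of the trained linear head on the given features."""
    preds = (features @ W + b).argmax(axis=1)
    return (preds == labels).mean()
```

The reported metric is then simply the probe's top-1 accuracy on a held-out split; because the encoder stays frozen, the score measures the quality of the learnt representation rather than of the classifier.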

You may want to read some blog posts before reading the papers and checking the leaderboards.

There is also Yann LeCun's talk at AAAI-20 which you can watch here (35:00+).

(Image credit: A Simple Framework for Contrastive Learning of Visual Representations)

Papers

Showing 51–100 of 110 papers

Title | Status | Hype
ResMLP: Feedforward networks for image classification with data-efficient training | Code | 1
ReSSL: Relational Self-Supervised Learning with Weak Augmentation | Code | 1
Self-labelling via simultaneous clustering and representation learning | Code | 1
Self-Supervised Classification Network | Code | 1
Self-Supervised Learning by Estimating Twin Class Distributions | Code | 1
Self-Supervised Learning of Pretext-Invariant Representations | Code | 1
Self-Supervised Learning with Swin Transformers | Code | 1
Similarity Contrastive Estimation for Self-Supervised Soft Contrastive Learning | Code | 1
SimMIM: A Simple Framework for Masked Image Modeling | Code | 1
Solving Inefficiency of Self-supervised Representation Learning | Code | 1
Multiplexed Immunofluorescence Brain Image Analysis Using Self-Supervised Dual-Loss Adaptive Masked Autoencoder | Code | 1
Towards Sustainable Self-supervised Learning | Code | 1
Unsupervised Feature Learning via Non-Parametric Instance Discrimination | Code | 1
Unsupervised Representation Learning by Predicting Image Rotations | Code | 1
Unsupervised Visual Representation Learning by Online Constrained K-Means | Code | 1
VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning | Code | 1
VNE: An Effective Method for Improving Deep Representation by Manipulating Eigenvalue Distribution | Code | 1
Weakly Supervised Contrastive Learning | Code | 1
Model-Aware Contrastive Learning: Towards Escaping the Dilemmas | Code | 0
With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations | Code | 0
Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet? | Code | 0
Unsupervised Representation Learning by Balanced Self Attention Matching | Code | 0
Efficient Self-supervised Vision Transformers for Representation Learning | Code | 0
Unsupervised Visual Representation Learning by Synchronous Momentum Grouping | Code | 0
Unsupervised Pre-Training of Image Features on Non-Curated Data | Code | 0
Local Aggregation for Unsupervised Learning of Visual Embeddings | Code | 0
Revisiting Self-Supervised Visual Representation Learning | Code | 0
Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision | Code | 0
Masked Image Residual Learning for Scaling Deeper Vision Transformers | Code | 0
BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers | Code | 0
OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework | Code | 0
Data-Efficient Image Recognition with Contrastive Predictive Coding | Code | 0
All4One: Symbiotic Neighbour Contrastive Learning via Self-Attention and Redundancy Reduction | Code | 0
Self-supervised Pretraining of Visual Features in the Wild | Code | 0
MV-MR: multi-views and multi-representations for self-supervised learning and knowledge distillation | Code | 0
Colorful Image Colorization | Code | 0
On Mutual Information Maximization for Representation Learning | Code | 0
EVA: Exploring the Limits of Masked Visual Representation Learning at Scale | Code | 0
Representation Learning by Learning to Count | Code | 0
Exploring Target Representations for Masked Autoencoders | Code | 0
Putting An End to End-to-End: Gradient-Isolated Learning of Representations | Code | 0
Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction | Code | 0
Learning Representations by Maximizing Mutual Information Across Views | Code | 0
Improving Visual Representation Learning through Perceptual Understanding | Code | 0
IPCL: Iterative Pseudo-Supervised Contrastive Learning to Improve Self-Supervised Feature Representation | Code | 0
What Makes for Good Views for Contrastive Learning? | Code | 0
SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations | Code | 0
Estimating Physical Information Consistency of Channel Data Augmentation for Remote Sensing Images | | 0
Large-Scale Unsupervised Person Re-Identification with Contrastive Learning | | 0
Consensus Clustering With Unsupervised Representation Learning | | 0
Page 2 of 3

No leaderboard results yet.