SOTAVerified

Contrastive Learning

Contrastive learning is a deep learning technique for unsupervised representation learning. The goal is to learn an embedding of the data in which similar instances (e.g., two augmented views of the same image) lie close together, while dissimilar instances lie far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
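The objective behind most of the papers listed below can be illustrated with an InfoNCE-style loss (the form popularized as NT-Xent in "A Simple Framework for Contrastive Learning of Visual Representations"). The following is a minimal NumPy sketch, not any specific paper's implementation; the function name, batch shapes, and default temperature are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.5):
    """InfoNCE-style contrastive loss for a batch of paired embeddings.

    z_a, z_b: (N, D) arrays where row i of z_a and row i of z_b are two
    views of the same instance (the positive pair); every other row in
    z_b acts as a negative for row i of z_a.
    (Illustrative sketch; real implementations differ in detail.)
    """
    # L2-normalize so the dot product is cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)

    # Pairwise similarity matrix, scaled by temperature: (N, N).
    logits = z_a @ z_b.T / temperature

    # Softmax cross-entropy where row i's correct "class" is column i,
    # i.e. pull the positive pair together, push negatives apart.
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Feeding the function two noisy copies of the same embeddings yields a much lower loss than feeding it unrelated embeddings, which is exactly the geometry the definition above asks for.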

(Image credit: Schroff et al. 2015)

Papers

Showing 6601–6625 of 6661 papers

Title | Status | Hype
Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels | Code | 1
Audio-Visual Instance Discrimination with Cross-Modal Agreement | Code | 1
Disentangled and Controllable Face Image Generation via 3D Imitative-Contrastive Learning | Code | 1
Supervised Contrastive Learning | Code | 2
Distilling Localization for Self-Supervised Representation Learning | | 0
CURL: Contrastive Unsupervised Representations for Reinforcement Learning | Code | 1
Clustering based Contrastive Learning for Improving Face Representations | | 0
Edge Guided GANs with Contrastive Learning for Semantic Image Synthesis | Code | 1
Semi-supervised Contrastive Learning Using Partial Label Information | | 0
On Compositions of Transformations in Contrastive Self-Supervised Learning | Code | 1
Improved Baselines with Momentum Contrastive Learning | Code | 1
Contrastive estimation reveals topic posterior information to linear models | | 0
CoLES: Contrastive Learning for Event Sequences with Self-Supervision | Code | 1
Convergence of End-to-End Training in Deep Unsupervised Contrastive Learning | | 0
A Simple Framework for Contrastive Learning of Visual Representations | Code | 2
On Contrastive Learning for Likelihood-free Inference | Code | 1
CURL: Contrastive Unsupervised Representation Learning for Reinforcement Learning | Code | 1
Understanding Contrastive Representation Learning through Geometry on the Hypersphere | Code | 1
Self-Supervised Learning of Pretext-Invariant Representations | Code | 1
Contrastive Learning of Structured World Models | Code | 0
Self-labelling via simultaneous clustering and representation learning | Code | 1
Momentum Contrast for Unsupervised Visual Representation Learning | Code | 3
Contrastive Multi-document Question Generation | Code | 0
Robust contrastive learning and nonlinear ICA in the presence of outliers | | 0
Contrastive Representation Distillation | Code | 1
Page 265 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 10..5sec | | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified