SOTAVerified

Contrastive Learning

Contrastive learning is a deep learning technique for unsupervised (self-supervised) representation learning. The goal is to learn an embedding of the data such that similar instances lie close together in the representation space, while dissimilar instances lie far apart.

It has proven effective across a range of computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these settings, the learned representations serve as features for downstream tasks such as classification and clustering.
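The pull-together/push-apart objective described above is commonly implemented as the NT-Xent (InfoNCE) loss popularized by SimCLR, one of the frameworks listed below. Here is a minimal NumPy sketch under the usual assumptions: two augmented views per example, cosine similarity, and a temperature hyperparameter (the function name and array shapes are illustrative, not from the source):

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.5):
    """NT-Xent loss for a batch of paired embeddings.

    z_i, z_j: (N, D) arrays holding embeddings of two augmented
    views of the same N examples; row k of z_i and row k of z_j
    form a positive pair, all other rows act as negatives.
    """
    z = np.concatenate([z_i, z_j], axis=0)             # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize -> dot = cosine
    sim = z @ z.T / temperature                        # (2N, 2N) similarity logits
    np.fill_diagonal(sim, -np.inf)                     # mask self-similarity
    n = len(z_i)
    # index of each row's positive partner in the concatenated batch
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of each row's softmax against its positive
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return np.mean(logsumexp - sim[np.arange(2 * n), targets])
```

Because the loss is a softmax cross-entropy over similarities, it is low when each embedding is most similar to its own augmented view and higher when positives are indistinguishable from negatives.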

(Image credit: Schroff et al. 2015)

Papers

Showing 201–225 of 6661 papers

Title | Status | Hype
Crafting Better Contrastive Views for Siamese Representation Learning | Code | 2
CoST: Contrastive Learning of Disentangled Seasonal-Trend Representations for Time Series Forecasting | Code | 2
PiCO+: Contrastive Label Disambiguation for Robust Partial Label Learning | Code | 2
PromptBERT: Improving BERT Sentence Embeddings with Prompts | Code | 2
C2AM: Contrastive Learning of Class-Agnostic Activation Map for Weakly Supervised Object Localization and Semantic Segmentation | Code | 2
Are Graph Augmentations Necessary? Simple Graph Contrastive Learning for Recommendation | Code | 2
Learning To Describe Player Form in The MLB | Code | 2
Socially-Aware Self-Supervised Tri-Training for Recommendation | Code | 2
SimCSE: Simple Contrastive Learning of Sentence Embeddings | Code | 2
Intriguing Properties of Contrastive Losses | Code | 2
Delving into Inter-Image Invariance for Unsupervised Visual Representations | Code | 2
Contrastive Learning for Unpaired Image-to-Image Translation | Code | 2
Unsupervised Learning of Visual Features by Contrasting Cluster Assignments | Code | 2
Supervised Contrastive Learning | Code | 2
A Simple Framework for Contrastive Learning of Visual Representations | Code | 2
RadiomicsRetrieval: A Customizable Framework for Medical Image Retrieval Using Radiomics Features | Code | 1
NLGCL: Naturally Existing Neighbor Layers Graph Contrastive Learning for Recommendation | Code | 1
Vector Contrastive Learning For Pixel-Wise Pretraining In Medical Vision | Code | 1
Refining music sample identification with a self-supervised graph neural network | Code | 1
TR2M: Transferring Monocular Relative Depth to Metric Depth with Language Descriptions and Scale-Oriented Contrast | Code | 1
On the Similarities of Embeddings in Contrastive Learning | Code | 1
Efficient Medical Vision-Language Alignment Through Adapting Masked Vision Models | Code | 1
Multiple Object Stitching for Unsupervised Representation Learning | Code | 1
C3S3: Complementary Competition and Contrastive Selection for Semi-Supervised Medical Image Segmentation | Code | 1
A Brain Graph Foundation Model: Pre-Training and Prompt-Tuning for Any Atlas and Disorder | Code | 1
Page 9 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | | 10..5sec | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified