SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
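The similar-close/dissimilar-far objective described above is commonly implemented with a contrastive loss such as InfoNCE (the NT-Xent loss used by SimCLR is a symmetrized variant). A minimal NumPy sketch, assuming paired embeddings from two augmented views; the function name and temperature value are illustrative, not from this page:

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.1):
    """InfoNCE loss for a batch of positive pairs.

    z_i, z_j: (N, D) embeddings of two augmented views; row k of z_i
    and row k of z_j come from the same instance (the positive pair),
    and every other row in the batch serves as a negative.
    """
    # L2-normalize so the dot product is cosine similarity
    z_i = z_i / np.linalg.norm(z_i, axis=1, keepdims=True)
    z_j = z_j / np.linalg.norm(z_j, axis=1, keepdims=True)
    logits = z_i @ z_j.T / temperature  # (N, N) similarity matrix
    # cross-entropy with the diagonal (positive pair) as the target class
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Aligned views should yield a much lower loss than unrelated embeddings
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_aligned = info_nce_loss(z, z)
loss_random = info_nce_loss(z, rng.normal(size=(8, 16)))
```

In practice (e.g. SimCLR) the loss is averaged over both view orderings and the negatives include the other samples from both views; this sketch keeps only the core term for clarity.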

(Image credit: Schroff et al. 2015)
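The credited figure is from Schroff et al. 2015 (FaceNet), which popularized the triplet loss, a classic contrastive objective: pull an anchor toward a positive of the same identity while pushing it away from a negative by at least a margin. A minimal sketch, assuming squared Euclidean distances and an illustrative margin value:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss over a batch of (anchor, positive, negative) embeddings.

    Each argument is an (N, D) array; the loss is zero for a triplet once
    the negative is farther than the positive by at least `margin`.
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=1)  # anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2, axis=1)  # anchor-negative distance
    return np.mean(np.maximum(d_pos - d_neg + margin, 0.0))

# A satisfied triplet (positive on top of anchor, negative far away) costs 0
a = np.zeros((4, 8))
n = np.ones((4, 8))
satisfied = triplet_loss(a, a, n)
violated = triplet_loss(a, n, a)
```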

Papers

Showing 576–600 of 6661 papers

Title | Status | Hype
AdCo: Adversarial Contrast for Efficient Learning of Unsupervised Representations from Self-Trained Negative Adversaries | Code | 1
Cross-modal Contrastive Learning for Multimodal Fake News Detection | Code | 1
AD-CLIP: Adapting Domains in Prompt Space Using CLIP | Code | 1
Cross-Domain Sentiment Classification with In-Domain Contrastive Learning | Code | 1
Cross-Domain Graph Anomaly Detection via Anomaly-aware Contrastive Alignment | Code | 1
Cross-Domain Sentiment Classification with Contrastive Learning and Mutual Information Maximization | Code | 1
Cross-level Contrastive Learning and Consistency Constraint for Semi-supervised Medical Image Segmentation | Code | 1
Cross-modal Contrastive Learning for Speech Translation | Code | 1
Cross-View Geolocalization and Disaster Mapping with Street-View and VHR Satellite Imagery: A Case Study of Hurricane IAN | Code | 1
ChatRetriever: Adapting Large Language Models for Generalized and Robust Conversational Dense Retrieval | Code | 1
Adaptive Supervised PatchNCE Loss for Learning H&E-to-IHC Stain Translation with Inconsistent Groundtruth Image Pairs | Code | 1
CRIS: CLIP-Driven Referring Image Segmentation | Code | 1
Chaos is a Ladder: A New Theoretical Understanding of Contrastive Learning via Augmentation Overlap | Code | 1
cRedAnno+: Annotation Exploitation in Self-Explanatory Lung Nodule Diagnosis | Code | 1
CROMA: Remote Sensing Representations with Contrastive Radar-Optical Masked Autoencoders | Code | 1
CETN: Contrast-enhanced Through Network for CTR Prediction | Code | 1
Enhancing Text-based Knowledge Graph Completion with Zero-Shot Large Language Models: A Focus on Semantic Enhancement | Code | 1
Counterfactual contrastive learning: robust representations via causal image synthesis | Code | 1
Adaptive Soft Contrastive Learning | Code | 1
CP2: Copy-Paste Contrastive Pretraining for Semantic Segmentation | Code | 1
CPLIP: Zero-Shot Learning for Histopathology with Comprehensive Vision-Language Alignment | Code | 1
A Multi-Task Semantic Decomposition Framework with Task-specific Pre-training for Few-Shot NER | Code | 1
Best of Both Worlds: Multimodal Contrastive Learning with Tabular and Imaging Data | Code | 1
Change-Aware Sampling and Contrastive Learning for Satellite Images | Code | 1
CoT-BERT: Enhancing Unsupervised Sentence Representation through Chain-of-Thought | Code | 1
Page 24 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 10..5sec | | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified