SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
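This objective is commonly implemented with an InfoNCE-style loss, where each anchor is pulled toward its matching positive and pushed away from the other samples in the batch. The sketch below is a minimal NumPy illustration (the function name and the temperature value are illustrative, not from the page above):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss.

    Row i of `positives` is the positive for row i of `anchors`;
    every other row in the batch serves as an in-batch negative.
    """
    # L2-normalise so the dot product is cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature  # (N, N) similarity matrix
    # Log-softmax over each row; the diagonal entries are the
    # matching (positive) pairs we want to score highest.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

The loss is small when each anchor is more similar to its own positive than to any other sample, which is exactly the "similar close, dissimilar far" geometry described above.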

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
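For retrieval-style downstream tasks, the learned embeddings are typically compared by cosine similarity and ranked nearest-first. A minimal sketch (hypothetical function name, not tied to any specific paper listed below):

```python
import numpy as np

def retrieve(query, gallery, k=3):
    """Return indices of the k gallery embeddings most similar to
    `query`, ranked by cosine similarity (highest first)."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity per gallery row
    return np.argsort(-sims)[:k]      # negate for descending order
```

The same ranking scheme underlies image retrieval and cross-modal retrieval: only the source of the query and gallery embeddings changes.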

(Image credit: Schroff et al. 2015)

Papers

Showing 25 of 6661 papers

Title | Status | Hype
On Distribution Shift in Learning-based Bug Detectors | Code | 1
FedCL: Federated Contrastive Learning for Privacy-Preserving Recommendation | — | 0
Generative or Contrastive? Phrase Reconstruction for Better Sentence Representation Learning | — | 0
Video Moment Retrieval from Text Queries via Single Frame Annotation | Code | 1
Utilizing unsupervised learning to improve sward content prediction and herbage mass estimation | — | 0
Multi-level Cross-view Contrastive Learning for Knowledge-aware Recommender System | Code | 1
Learning to Imagine: Diversify Memory for Incremental Learning using Unlabeled Data | Code | 1
Attributed Graph Clustering with Dual Redundancy Reduction | Code | 1
Gated Multimodal Fusion with Contrastive Learning for Turn-taking Prediction in Human-robot Dialogue | — | 0
Self Supervised Lesion Recognition For Breast Ultrasound Diagnosis | — | 0
Detect Rumors in Microblog Posts for Low-Resource Domains via Adversarial Contrastive Learning | Code | 1
Caption Feature Space Regularization for Audio Captioning | Code | 0
GL-CLeF: A Global-Local Contrastive Learning Framework for Cross-lingual Spoken Language Understanding | Code | 0
Unsupervised Contrastive Domain Adaptation for Semantic Segmentation | — | 0
Contrastive Learning with Hard Negative Entities for Entity Set Expansion | Code | 1
A Contrastive Cross-Channel Data Augmentation Framework for Aspect-based Sentiment Analysis | Code | 1
Perfectly Balanced: Improving Transfer and Robustness of Supervised Contrastive Learning | Code | 1
DialAug: Mixing up Dialogue Contexts in Contrastive Learning for Robust Conversational Modeling | — | 0
CILDA: Contrastive Data Augmentation using Intermediate Layer Knowledge Distillation | — | 0
COTS: Collaborative Two-Stream Vision-Language Pre-Training Model for Cross-Modal Retrieval | — | 0
Improving Cross-Modal Understanding in Visual Dialog via Contrastive Learning | — | 0
CroCo: Cross-Modal Contrastive learning for localization of Earth Observation data | Code | 0
Contrastive Learning for Image Registration in Visual Teach and Repeat Navigation | Code | 0
Learning to Revise References for Faithful Summarization | Code | 1
Efficient Cluster-Based k-Nearest-Neighbor Machine Translation | Code | 0
Page 203 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | — | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | — | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | — | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | — | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | — | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | — | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | — | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | — | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | — | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | — | Unverified