SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
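A widely used contrastive objective is the NT-Xent (normalized temperature-scaled cross-entropy) loss popularized by SimCLR. The sketch below, using NumPy, shows the core idea on two batches of embeddings from two augmented views of the same inputs; the function name and temperature default are illustrative choices, not a reference implementation.

```python
import numpy as np

def nt_xent_loss(z_i, z_j, temperature=0.5):
    """NT-Xent contrastive loss (a common SimCLR-style objective).

    z_i, z_j: embeddings of two augmented views, each of shape (N, D).
    Matching rows are positives; all other rows in the combined batch
    serve as negatives.
    """
    z = np.concatenate([z_i, z_j], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # unit-normalize rows
    sim = z @ z.T / temperature                         # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity
    n = z_i.shape[0]
    # Positive index for each row: view j for view i, and vice versa.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

When the two views embed identically, the positive term dominates the softmax and the loss is small; mismatched views yield a larger loss, which is what drives similar instances together in the representation space.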

(Image credit: Schroff et al. 2015)
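The credited work, FaceNet (Schroff et al., 2015), popularized the triplet loss, an earlier contrastive formulation over (anchor, positive, negative) triples. A minimal NumPy sketch, with an illustrative margin value:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss in the style of Schroff et al. (2015): pull the anchor
    toward the positive and push it from the negative until the squared
    distances differ by at least `margin`. Inputs: (N, D) embeddings."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)  # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2, axis=1)  # squared distance to negative
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()
```

Triplets whose negative is already more than `margin` farther away than the positive contribute zero loss, so training focuses on "hard" triplets.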

Papers

Showing 1376–1400 of 6661 papers

Title | Status | Hype
Efficient Contrastive Learning via Novel Data Augmentation and Curriculum Learning | Code | 1
Modulated Contrast for Versatile Image Synthesis | Code | 1
MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter | Code | 1
Efficient fine-tuning methodology of text embedding models for information retrieval: contrastive learning penalty (clp) | Code | 1
Contrastive Learning Reduces Hallucination in Conversations | Code | 1
Contrastive Neural Processes for Self-Supervised Learning | Code | 1
AutoGCL: Automated Graph Contrastive Learning via Learnable View Generators | Code | 1
Eliciting Knowledge from Pretrained Language Models for Prototypical Prompt Verbalizer | Code | 1
Efficient Medical Vision-Language Alignment Through Adapting Masked Vision Models | Code | 1
Efficient Non-Local Contrastive Attention for Image Super-Resolution | Code | 1
Motion-aware Contrastive Video Representation Learning via Foreground-background Merging | Code | 1
Bridge to Target Domain by Prototypical Contrastive Learning and Label Confusion: Re-explore Zero-Shot Learning for Slot Filling | Code | 1
From t-SNE to UMAP with contrastive learning | Code | 1
Efficient Zero-shot Event Extraction with Context-Definition Alignment | Code | 1
Contrastive Prototypical Network with Wasserstein Confidence Penalty | Code | 1
Bridging Gaps: Federated Multi-View Clustering in Heterogeneous Hybrid Views | Code | 1
Contrastive Learning with Adversarial Perturbations for Conditional Text Generation | Code | 1
AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning | Code | 1
Contrastive Learning with Bidirectional Transformers for Sequential Recommendation | Code | 1
Improving Gloss-free Sign Language Translation by Reducing Representation Density | Code | 1
Multi-Grained Multimodal Interaction Network for Entity Linking | Code | 1
Contrastive Learning with Continuous Proxy Meta-Data for 3D MRI Classification | Code | 1
Bridging Mini-Batch and Asymptotic Analysis in Contrastive Learning: From InfoNCE to Kernel-Based Losses | Code | 1
Contrastive Learning with Cross-Modal Knowledge Mining for Multimodal Human Activity Recognition | Code | 1
Improving Molecular Contrastive Learning via Faulty Negative Mitigation and Decomposed Fragment Contrast | Code | 1
Page 56 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 10..5sec | 1 | | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified