SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.

(Image credit: Schroff et al. 2015)
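The "pull positives together, push negatives apart" objective described above is commonly instantiated as the InfoNCE (NT-Xent) loss used by methods such as SimCLR. A minimal NumPy sketch (the function name and batch setup are illustrative, not from any specific paper's code):

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.5):
    """InfoNCE-style contrastive loss over a batch of paired views.

    z_a, z_b: (N, D) embeddings of two views of the same N instances;
    row i of z_a and row i of z_b form a positive pair, and all other
    rows in z_b serve as in-batch negatives for z_a[i].
    """
    # L2-normalize so dot products are cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature            # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    # Cross-entropy where the diagonal (the true pair) is the target class
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 16))
positives = anchors + 0.01 * rng.normal(size=(8, 16))  # near-identical views
print(info_nce_loss(anchors, positives))  # low loss: positives dominate
```

Minimizing this loss drives each embedding toward its positive view and away from the other instances in the batch, which is exactly the geometry the paragraph above describes.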

Papers

Showing 1801–1850 of 6661 papers

Title | Status | Hype
Leveraging Hidden Positives for Unsupervised Semantic Segmentation | Code | 1
Degradation-Aware Self-Attention Based Transformer for Blind Image Super-Resolution | Code | 1
CLIP-Event: Connecting Text and Images with Event Structures | Code | 1
Leveraging Textual Anatomical Knowledge for Class-Imbalanced Semi-Supervised Multi-Organ Segmentation | Code | 1
Asymmetric Patch Sampling for Contrastive Learning | Code | 1
Uncertainty-aware Contrastive Distillation for Incremental Semantic Segmentation | Code | 1
Factorized Contrastive Learning: Going Beyond Multi-view Redundancy | Code | 1
Delving StyleGAN Inversion for Image Editing: A Foundation Latent Space Viewpoint | Code | 1
Like a Good Nearest Neighbor: Practical Content Moderation and Text Classification | Code | 1
Democracy Does Matter: Comprehensive Feature Mining for Co-Salient Object Detection | Code | 1
CLIP-KD: An Empirical Study of CLIP Model Distillation | Code | 1
Denoise and Contrast for Category Agnostic Shape Completion | Code | 1
Differentiable Data Augmentation for Contrastive Sentence Representation Learning | Code | 1
Denoising-Aware Contrastive Learning for Noisy Time Series | Code | 1
Denoising Diffusion Autoencoders are Unified Self-supervised Learners | Code | 1
CLIPLoss and Norm-Based Data Selection Methods for Multimodal Contrastive Learning | Code | 1
LipLearner: Customizable Silent Speech Interactions on Mobile Devices | Code | 1
DenoSent: A Denoising Objective for Self-Supervised Sentence Representation Learning | Code | 1
Modeling Text-Label Alignment for Hierarchical Text Classification | Code | 1
LIV: Language-Image Representations and Rewards for Robotic Control | Code | 1
Dog nose print matching with dual global descriptor based on Contrastive Learning | Code | 1
LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning | Code | 1
Modeling Two-Way Selection Preference for Person-Job Fit | Code | 1
DEnsity: Open-domain Dialogue Evaluation Metric using Density Estimation | Code | 1
Long-tail Augmented Graph Contrastive Learning for Recommendation | Code | 1
Long-Tailed Recognition by Mutual Information Maximization between Latent Features and Ground-Truth Labels | Code | 1
Low-Rank Similarity Mining for Multimodal Dataset Distillation | Code | 1
Unifying Graph Contrastive Learning with Flexible Contextual Scopes | Code | 1
DialogueCSE: Dialogue-based Contrastive Learning of Sentence Embeddings | Code | 1
Low-rank Prompt Interaction for Continual Vision-Language Retrieval | Code | 1
DICNet: Deep Instance-Level Contrastive Network for Double Incomplete Multi-View Multi-Label Classification | Code | 1
Direct Preference-based Policy Optimization without Reward Modeling | Code | 1
Diagnosing and Rectifying Vision Models using Language | Code | 1
A Unified Framework for Microscopy Defocus Deblur with Multi-Pyramid Transformer and Contrastive Learning | Code | 1
DiffSim: Taming Diffusion Models for Evaluating Visual Similarity | Code | 1
Disentangled Contrastive Collaborative Filtering | Code | 1
ReMeDi: Resources for Multi-domain, Multi-service, Medical Dialogues | Code | 1
UniSAR: Modeling User Transition Behaviors between Search and Recommendation | Code | 1
Modeling User Fatigue for Sequential Recommendation | Code | 1
MABEL: Attenuating Gender Bias using Textual Entailment Data | Code | 1
Unlocking the diagnostic potential of electrocardiograms through information transfer from cardiac magnetic resonance imaging | Code | 1
Unlocking the Potential of Unlabeled Data in Semi-Supervised Domain Generalization | Code | 1
Multi-label Sequential Sentence Classification via Large Language Model | Code | 1
CL-MVSNet: Unsupervised Multi-View Stereo with Dual-Level Contrastive Learning | Code | 1
Detect Rumors in Microblog Posts for Low-Resource Domains via Adversarial Contrastive Learning | Code | 1
MA-GCL: Model Augmentation Tricks for Graph Contrastive Learning | Code | 1
Manifold DivideMix: A Semi-Supervised Contrastive Learning Framework for Severe Label Noise | Code | 1
Making Your First Choice: To Address Cold Start Problem in Vision Active Learning | Code | 1
DFIL: Deepfake Incremental Learning by Exploiting Domain-invariant Forgery Clues | Code | 1
MVCNet: Multi-View Contrastive Network for Motor Imagery Classification | Code | 1
Page 37 of 134

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | | 0..5sec | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified