SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised (self-supervised) representation learning. The goal is to learn an embedding of the data in which similar instances lie close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
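To make the objective concrete, the widely used InfoNCE (NT-Xent) loss pulls each positive pair of embeddings together while pushing all other samples in the batch apart. The sketch below is a minimal NumPy implementation for illustration; the function name, batch layout (row k of `z_i` and row k of `z_j` form a positive pair), and temperature default are our own assumptions, not a specific paper's reference code.

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.5):
    """InfoNCE / NT-Xent loss over a batch of positive pairs.

    z_i, z_j: (N, D) arrays of embeddings; row k of z_i and row k of z_j
    are two views of the same instance (a positive pair), and every other
    row in the batch serves as a negative.
    """
    z = np.concatenate([z_i, z_j], axis=0)             # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-norm -> cosine sim
    sim = z @ z.T / temperature                        # (2N, 2N) similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    n = len(z_i)
    # index of each row's positive partner in the concatenated batch
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of the positive against all other samples
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

When the two views of each instance are genuinely aligned, this loss is lower than when the pairs are random, which is exactly the signal that shapes the representation space described above.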

(Image credit: Schroff et al. 2015)

Papers

Showing 2301–2350 of 6661 papers

Title | Status | Hype
--- | --- | ---
Adversarial Learning Data Augmentation for Graph Contrastive Learning in Recommendation | Code | 0
Caption Feature Space Regularization for Audio Captioning | Code | 0
MassNet: A Deep Learning Approach for Body Weight Extraction from A Single Pressure Image | Code | 0
Adversarial Graph Contrastive Learning with Information Regularization | Code | 0
Masking Improves Contrastive Self-Supervised Learning for ConvNets, and Saliency Tells You Where | Code | 0
Masked Student Dataset of Expressions | Code | 0
Mask-Guided Contrastive Attention Model for Person Re-Identification | Code | 0
Contrastive Visual-Linguistic Pretraining | Code | 0
Mask-informed Deep Contrastive Incomplete Multi-view Clustering | Code | 0
Single-Pass Contrastive Learning Can Work for Both Homophilic and Heterophilic Graph | Code | 0
Can Self-Supervised Representation Learning Methods Withstand Distribution Shifts and Corruptions? | Code | 0
Contrastive Variational Autoencoder Enhances Salient Features | Code | 0
UoR-NCL at SemEval-2025 Task 1: Using Generative LLMs and CLIP Models for Multilingual Multimodal Idiomaticity Representation | Code | 0
Masked Collaborative Contrast for Weakly Supervised Semantic Segmentation | Code | 0
Can Machines Resonate with Humans? Evaluating the Emotional and Empathic Comprehension of LMs | Code | 0
Contrastive Transformer Learning with Proximity Data Generation for Text-Based Person Search | Code | 0
A Contrastive Learning Scheme with Transformer Innate Patches | Code | 0
Contrastive Training of Complex-Valued Autoencoders for Object Discovery | Code | 0
Approximate Bijective Correspondence for isolating factors of variation | Code | 0
MarsEclipse at SemEval-2023 Task 3: Multi-Lingual and Multi-Label Framing Detection with Contrastive Learning | Code | 0
MAPConNet: Self-supervised 3D Pose Transfer with Mesh and Point Contrastive Learning | Code | 0
ManiNeg: Manifestation-guided Multimodal Pretraining for Mammography Classification | Code | 0
Camera-Tracklet-Aware Contrastive Learning for Unsupervised Vehicle Re-Identification | Code | 0
Mao-Zedong At SemEval-2023 Task 4: Label Represention Multi-Head Attention Model With Contrastive Learning-Enhanced Nearest Neighbor Mechanism For Multi-Label Text Classification | Code | 0
MAPS: Motivation-Aware Personalized Search via LLM-Driven Consultation Alignment | Code | 0
Camera Alignment and Weighted Contrastive Learning for Domain Adaptation in Video Person ReID | Code | 0
Contrastive Self-Supervised Learning for Wireless Power Control | Code | 0
Contrastive Self-Supervised Learning Based Approach for Patient Similarity: A Case Study on Atrial Fibrillation Detection from PPG Signal | Code | 0
Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning | Code | 0
Adversarial Examples can be Effective Data Augmentation for Unsupervised Machine Learning | Code | 0
Making the Most of Text Semantics to Improve Biomedical Vision--Language Processing | Code | 0
Manifold Contrastive Learning with Variational Lie Group Operators | Code | 0
MA-AVT: Modality Alignment for Parameter-Efficient Audio-Visual Transformers | Code | 0
Calibration-based Dual Prototypical Contrastive Learning Approach for Domain Generalization Semantic Segmentation | Code | 0
M3ANet: Multi-scale and Multi-Modal Alignment Network for Brain-Assisted Target Speaker Extraction | Code | 0
Machine Unlearning in Hyperbolic vs. Euclidean Multimodal Contrastive Learning: Adapting Alignment Calibration to MERU | Code | 0
Calibrating Multi-modal Representations: A Pursuit of Group Robustness without Annotations | Code | 0
M3: A Multi-Task Mixed-Objective Learning Framework for Open-Domain Multi-Hop Dense Sentence Retrieval | Code | 0
Contrastive Representation for Interactive Recommendation | Code | 0
CADet: Fully Self-Supervised Out-Of-Distribution Detection With Contrastive Learning | Code | 0
Multi-task Pre-training Language Model for Semantic Network Completion | Code | 0
LostPaw: Finding Lost Pets using a Contrastive Learning-based Transformer with Visual Input | Code | 0
A Closer Look at Invariances in Self-supervised Pre-training for 3D Vision | Code | 0
Looking Beyond Corners: Contrastive Learning of Visual Representations for Keypoint Detection and Description Extraction | Code | 0
Low-confidence Samples Matter for Domain Adaptation | Code | 0
A Novel Contrastive Learning Method for Clickbait Detection on RoCliCo: A Romanian Clickbait Corpus of News Articles | Code | 0
Augmentation Component Analysis: Modeling Similarity via the Augmentation Overlaps | Code | 0
Low-Contrast-Enhanced Contrastive Learning for Semi-Supervised Endoscopic Image Segmentation | Code | 0
LogiCoL: Logically-Informed Contrastive Learning for Set-based Dense Retrieval | Code | 0
Local Aggregation for Unsupervised Learning of Visual Embeddings | Code | 0
Page 47 of 134

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | | 10..5sec | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified
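The tables above report Top-1 accuracy: the fraction of examples whose highest-scoring predicted class matches the true label. As a reminder of what that metric computes, here is a minimal sketch; the function name and toy inputs are illustrative, not any benchmark's evaluation code.

```python
import numpy as np

def top1_accuracy(scores, labels):
    """Fraction of examples whose argmax prediction equals the true label.

    scores: (N, C) array of per-class scores; labels: (N,) integer labels.
    """
    return float((scores.argmax(axis=1) == labels).mean())

scores = np.array([[0.1, 0.9],
                   [0.8, 0.2],
                   [0.3, 0.7]])
labels = np.array([1, 0, 0])
acc = top1_accuracy(scores, labels)  # 2 of 3 predictions are correct
```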