SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised (self-supervised) representation learning. The goal is to learn an embedding space in which similar instances, typically augmented views of the same sample, lie close together, while dissimilar instances lie far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.

(Image credit: Schroff et al. 2015)
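The pull-together/push-apart objective described above is commonly implemented as an InfoNCE (NT-Xent) loss. The sketch below is a minimal, hypothetical NumPy illustration, not any specific paper's implementation: each anchor embedding is contrasted against one matching "positive" view and treats all other rows in the batch as negatives.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """Minimal InfoNCE sketch: row i of z_a should match row i of z_b
    (positive pair) and mismatch every other row (negatives)."""
    # L2-normalise so the dot product is cosine similarity
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature  # (N, N) similarity matrix
    # positives sit on the diagonal; cross-entropy against that diagonal
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 16))                      # 8 samples, 16-dim embeddings
positives = anchors + 0.01 * rng.normal(size=(8, 16))   # slightly perturbed "augmented views"
mismatched = rng.normal(size=(8, 16))                   # unrelated embeddings

# matched views incur a much lower loss than random pairings
print(info_nce_loss(anchors, positives) < info_nce_loss(anchors, mismatched))  # prints True
```

In practice the two views would come from data augmentation (crops, color jitter, masking) of the same input, and the loss would be minimized with gradient descent over an encoder network; the temperature controls how sharply hard negatives are penalized.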

Papers

Showing 1701-1750 of 6661 papers

Title | Status | Hype
Learning Representation for Clustering via Prototype Scattering and Positive Sampling | Code | 1
Data Poisoning Attacks Against Multimodal Encoders | Code | 1
Data Poisoning based Backdoor Attacks to Contrastive Learning | Code | 1
E2USD: Efficient-yet-effective Unsupervised State Detection for Multivariate Time Series | Code | 1
COMPLETER: Incomplete Multi-view Clustering via Contrastive Prediction | Code | 1
Self-Supervised Contrastive Learning for Singing Voices | Code | 1
Enhancing Sound Source Localization via False Negative Elimination | Code | 1
Enriched Music Representations with Multiple Cross-modal Contrastive Learning | Code | 1
Self-Supervised Graph Co-Training for Session-based Recommendation | Code | 1
Self-supervised Graph Neural Networks without explicit negative sampling | Code | 1
EASE: Entity-Aware Contrastive Learning of Sentence Embedding | Code | 1
Self-Supervised Learning for Fine-Grained Image Classification | Code | 1
Self-supervised Representation Learning Framework for Remote Physiological Measurement Using Spatiotemporal Augmentation Loss | Code | 1
DC-Seg: Disentangled Contrastive Learning for Brain Tumor Segmentation with Missing Modalities | Code | 1
Enhancing Self-supervised Video Representation Learning via Multi-level Feature Optimization | Code | 1
SAM: Self-supervised Learning of Pixel-wise Anatomical Embeddings in Radiological Images | Code | 1
Self-Supervised Longitudinal Neighbourhood Embedding | Code | 1
Self-Supervised Pre-Training with Contrastive and Masked Autoencoder Methods for Dealing with Small Datasets in Deep Learning for Medical Imaging | Code | 1
Self-Supervised Predictive Learning: A Negative-Free Method for Sound Source Localization in Visual Scenes | Code | 1
PointVST: Self-Supervised Pre-training for 3D Point Clouds via View-Specific Point-to-Image Translation | Code | 1
Debiased Contrastive Learning | Code | 1
Debiased Contrastive Learning for Sequential Recommendation | Code | 1
CLCC: Contrastive Learning for Color Constancy | Code | 1
Self-supervised speech representation and contextual text embedding for match-mismatch classification with EEG recording | Code | 1
Debiased Contrastive Learning of Unsupervised Sentence Representations | Code | 1
Self-supervised Trajectory Representation Learning with Temporal Regularities and Travel Semantics | Code | 1
Enhancing Representation in Radiography-Reports Foundation Model: A Granular Alignment Algorithm Using Masked Contrastive Learning | Code | 1
Enhancing Semantics in Multimodal Chain of Thought via Soft Negative Sampling | Code | 1
Semantic-Aware Dual Contrastive Learning for Multi-label Image Classification | Code | 1
Probabilistic Contrastive Learning for Domain Adaptation | Code | 1
Entailment as Few-Shot Learner | Code | 1
Semi-Supervised Action Recognition with Temporal Contrastive Learning | Code | 1
Enhancing Modal Fusion by Alignment and Label Matching for Multimodal Emotion Recognition | Code | 1
DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations | Code | 1
Decoding Natural Images from EEG for Object Recognition | Code | 1
CLDG: Contrastive Learning on Dynamic Graphs | Code | 1
Company-as-Tribe: Company Financial Risk Assessment on Tribe-Style Graph with Hierarchical Graph Neural Networks | Code | 1
Assisting Mathematical Formalization with A Learning-based Premise Retriever | Code | 1
Semi-Supervised Semantic Segmentation with Pixel-Level Contrastive Learning from a Class-wise Memory Bank | Code | 1
Semi-supervised Semantic Segmentation with Error Localization Network | Code | 1
Deep Boosting Learning: A Brand-new Cooperative Approach for Image-Text Matching | Code | 1
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1
Community-Invariant Graph Contrastive Learning | Code | 1
Decoupled Contrastive Learning for Long-Tailed Recognition | Code | 1
Decoupled Contrastive Multi-View Clustering with High-Order Random Walks | Code | 1
Separated Contrastive Learning for Organ-at-Risk and Gross-Tumor-Volume Segmentation with Limited Annotation | Code | 1
AstroCLIP: A Cross-Modal Foundation Model for Galaxies | Code | 1
Enhancing Information Maximization with Distance-Aware Contrastive Learning for Source-Free Cross-Domain Few-Shot Learning | Code | 1
CLEFT: Language-Image Contrastive Learning with Efficient Large Language Model and Prompt Fine-Tuning | Code | 1
Lambda: Learning Matchable Prior For Entity Alignment with Unlabeled Dangling Cases | Code | 1
Page 35 of 134

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | | 10..5sec | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified