SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
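The core objective behind many of these methods can be sketched as a normalized temperature-scaled cross-entropy (NT-Xent) loss over paired embeddings, as popularized by SimCLR-style training. The snippet below is a minimal NumPy sketch, not any specific paper's implementation; the function name, batch layout, and temperature value are illustrative assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Minimal NT-Xent contrastive loss sketch (illustrative, not from any
    specific paper in the list above).

    z1, z2: (N, D) arrays holding embeddings of two augmented views of the
    same N instances. Row i of z1 and row i of z2 form a positive pair;
    all other rows in the combined batch act as negatives.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # The positive for sample i is sample i + N (and vice versa).
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy: pull each positive pair together, push negatives apart.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()
```

When the two views are near-duplicates of each other, the loss is lower than when the second view is unrelated, which is exactly the "similar close, dissimilar far" behavior described above.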

(Image credit: Schroff et al. 2015)

Papers

Showing 1551–1575 of 6661 papers

Title | Status | Hype
Dual-level Adaptive Incongruity-enhanced Model for Multimodal Sarcasm Detection | Code | 1
MedualTime: A Dual-Adapter Language Model for Medical Time Series-Text Multimodal Learning | Code | 1
CCL: Continual Contrastive Learning for LiDAR Place Recognition | Code | 1
CoT-BERT: Enhancing Unsupervised Sentence Representation through Chain-of-Thought | Code | 1
DyTed: Disentangled Representation Learning for Discrete-time Dynamic Graph | Code | 1
Object Discovery via Contrastive Learning for Weakly Supervised Object Detection | Code | 1
Offline-Online Associated Camera-Aware Proxies for Unsupervised Person Re-identification | Code | 1
OmniSeg3D: Omniversal 3D Segmentation via Hierarchical Contrastive Learning | Code | 1
One Perturbation is Enough: On Generating Universal Adversarial Perturbations against Vision-Language Pre-training Models | Code | 1
Contrastive Learning of Relative Position Regression for One-Shot Object Localization in 3D Medical Images | Code | 1
COMPLETER: Incomplete Multi-view Clustering via Contrastive Prediction | Code | 1
Driver Anomaly Detection: A Dataset and Contrastive Learning Approach | Code | 1
On Isotropy, Contextualization and Learning Dynamics of Contrastive-based Sentence Representation Learning | Code | 1
On Learning to Summarize with Large Language Models as References | Code | 1
On Narrative Information and the Distillation of Stories | Code | 1
On Representation Knowledge Distillation for Graph Neural Networks | Code | 1
CP2: Copy-Paste Contrastive Pretraining for Semantic Segmentation | Code | 1
CDPAM: Contrastive learning for perceptual audio similarity | Code | 1
Adversarial Self-Supervised Contrastive Learning | Code | 1
Enhancing Text-based Knowledge Graph Completion with Zero-Shot Large Language Models: A Focus on Semantic Enhancement | Code | 1
CPLIP: Zero-Shot Learning for Histopathology with Comprehensive Vision-Language Alignment | Code | 1
OntoProtein: Protein Pretraining With Gene Ontology Embedding | Code | 1
DrugCLIP: Contrastive Protein-Molecule Representation Learning for Virtual Screening | Code | 1
OpenFashionCLIP: Vision-and-Language Contrastive Learning with Open-Source Fashion Data | Code | 1
Cross-Architecture Self-supervised Video Representation Learning | Code | 1
Page 63 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | — | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | — | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | — | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | — | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | — | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | — | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | — | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | — | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | — | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 10..5sec | 1 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | — | Unverified