SOTAVerified

Contrastive Learning

Contrastive learning is a deep learning technique for unsupervised representation learning. The goal is to learn an embedding of the data in which similar instances (positive pairs) lie close together in the representation space, while dissimilar instances (negatives) lie far apart.
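The "pull together, push apart" objective is often implemented as an InfoNCE-style loss. The sketch below is a minimal NumPy illustration (not any specific paper's implementation), assuming each anchor's positive shares its row index and all other rows in the batch serve as negatives:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.5):
    """Minimal InfoNCE-style contrastive loss.

    anchors, positives: (N, D) arrays; row i of `positives` is the
    positive for row i of `anchors`, every other row is a negative.
    """
    # L2-normalize so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)

    logits = (a @ p.T) / temperature             # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability

    # Cross-entropy with the diagonal (the matching pairs) as the targets
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))
```

Minimizing this loss pulls each anchor toward its own positive (the diagonal) and away from the in-batch negatives; perfectly aligned pairs yield a loss well below log N, while unrelated pairs sit near it.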

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.

(Image credit: Schroff et al. 2015)
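The credited figure comes from FaceNet (Schroff et al., 2015), which popularized the triplet loss, an early and widely used contrastive objective. A minimal NumPy sketch, illustrative rather than the paper's actual implementation:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet margin loss over batches of (N, D) embeddings.

    Pulls each anchor toward its positive and pushes it at least
    `margin` farther from its negative (in squared Euclidean distance).
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=1)  # anchor-positive distances
    d_neg = np.sum((anchor - negative) ** 2, axis=1)  # anchor-negative distances
    # Hinge: loss is zero once the negative is `margin` farther than the positive
    return float(np.mean(np.maximum(d_pos - d_neg + margin, 0.0)))
```

When the positive coincides with the anchor and the negative is far away, the hinge is inactive and the loss is zero; swapping positive and negative makes it strictly positive.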

Papers

Showing 3201–3225 of 6661 papers

Title | Status | Hype
Audio-Visual Contrastive Learning with Temporal Self-Supervision | — | 0
DN-CL: Deep Symbolic Regression against Noise via Contrastive Learning | — | 0
DMT: Comprehensive Distillation with Multiple Self-supervised Teachers | — | 0
DMMG: Dual Min-Max Games for Self-Supervised Skeleton-Based Action Recognition | — | 0
Self-supervised Contrastive Learning for Audio-Visual Action Recognition | — | 0
Align and Aggregate: Compositional Reasoning with Video Alignment and Answer Aggregation for Video Question-Answering | — | 0
ActiveMatch: End-to-end Semi-supervised Active Representation Learning | — | 0
Diving into Unified Data-Model Sparsity for Class-Imbalanced Graph Representation Learning | — | 0
CoCGAN: Contrastive Learning for Adversarial Category Text Generation | — | 0
Divide and Contrast: Self-supervised Learning from Uncurated Data | — | 0
Distribution Shift Matters for Knowledge Distillation with Webly Collected Images | — | 0
Coarse-to-Fine Contrastive Learning on Graphs | — | 0
Coarse-to-Fine Contrastive Learning in Image-Text-Graph Space for Improved Vision-Language Compositionality | — | 0
Decentralized Unsupervised Learning of Visual Representations | — | 0
Distributed Contrastive Learning for Medical Image Segmentation | — | 0
Distortion-Disentangled Contrastive Learning | — | 0
Audio Contrastive based Fine-tuning | — | 0
Distilling Structured Knowledge for Text-Based Relational Reasoning | — | 0
CO3: Low-resource Contrastive Co-training for Generative Conversational Query Rewrite | — | 0
A two-steps approach to improve the performance of Android malware detectors | — | 0
AlexU-AIC at Arabic Hate Speech 2022: Contrast to Classify | — | 0
Distilling Localization for Self-Supervised Representation Learning | — | 0
CO2Sum: Contrastive Learning for Factual-Consistent Abstractive Summarization | — | 0
Distill CLIP (DCLIP): Enhancing Image-Text Retrieval via Cross-Modal Transformer Distillation | — | 0
A Two-Stage Prediction-Aware Contrastive Learning Framework for Multi-Intent NLU | — | 0
Page 129 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | — | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | — | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | — | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | — | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | — | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | — | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | — | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | — | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | — | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | — | 10..5sec | 1 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | — | Unverified