SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
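The "similar close, dissimilar far" objective described above is most often trained with an InfoNCE-style loss, where two augmented views of the same instance form a positive pair and all other pairings in the batch act as negatives. A minimal NumPy sketch (the function name and temperature value are illustrative, not from any specific paper on this page):

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.5):
    """InfoNCE-style contrastive loss (minimal sketch).

    z_a, z_b: (N, D) embeddings of two augmented views of the same N
    instances. Row i of z_a and row i of z_b are a positive pair; every
    other cross-view pairing is treated as a negative.
    """
    # L2-normalise so dot products are cosine similarities.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)

    # (N, N) similarity matrix, scaled by temperature.
    logits = z_a @ z_b.T / temperature

    # Cross-entropy with "matching index" targets 0..N-1: instance i
    # should be most similar to its own second view (the diagonal).
    logits = logits - logits.max(axis=1, keepdims=True)  # stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

When the two views encode the same instances, the diagonal dominates and the loss is small; for unrelated embeddings it approaches log N, the loss of a uniform guess over the batch.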

(Image credit: Schroff et al. 2015)

Papers

Showing 4376–4400 of 6661 papers

| Title | Status | Hype |
|---|---|---|
| Adaptive Contrastive Learning with Dynamic Correlation for Multi-Phase Organ Segmentation | Code | 0 |
| Semantic Segmentation with Active Semi-Supervised Representation Learning | — | 0 |
| How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders | Code | 1 |
| Augmentation-Free Graph Contrastive Learning of Invariant-Discriminative Representations | Code | 3 |
| Improving Radiology Summarization with Radiograph and Anatomy Prompts | — | 0 |
| Augmented Dual-Contrastive Aggregation Learning for Unsupervised Visible-Infrared Person Re-Identification | Code | 1 |
| Instance Segmentation with Cross-Modal Consistency | — | 0 |
| MICO: A Multi-alternative Contrastive Learning Framework for Commonsense Knowledge Representation | Code | 1 |
| Robust Preference Learning for Storytelling via Contrastive Reinforcement Learning | — | 0 |
| Blind Super-Resolution for Remote Sensing Images via Conditional Stochastic Normalizing Flows | — | 0 |
| Fine-grained Category Discovery under Coarse-grained Supervision with Hierarchical Weighted Self-contrastive Learning | Code | 1 |
| Invariance-adapted Decomposition and Lasso-type Contrastive Learning | — | 0 |
| TractoSCR: A Novel Supervised Contrastive Regression Framework for Prediction of Neurocognitive Measures Using Multi-Site Harmonized Diffusion MRI Tractography | — | 0 |
| LEAVES: Learning Views for Time-Series Data in Contrastive Learning | — | 0 |
| Low-resource Neural Machine Translation with Cross-modal Alignment | Code | 1 |
| Closed-book Question Generation via Contrastive Learning | Code | 0 |
| RaP: Redundancy-aware Video-language Pre-training for Text-Video Retrieval | Code | 0 |
| Contrastive Retrospection: Honing in on Critical Steps for Rapid Learning and Generalization in RL | Code | 1 |
| Language Agnostic Multilingual Information Retrieval with Contrastive Learning | Code | 0 |
| Prepended Domain Transformer: Heterogeneous Face Recognition without Bells and Whistles | Code | 0 |
| QDTrack: Quasi-Dense Similarity Learning for Appearance-Only Multiple Object Tracking | Code | 2 |
| Multi-Granularity Cross-modal Alignment for Generalized Medical Visual Representation Learning | Code | 1 |
| Self-supervised Video Pretraining Yields Robust and More Human-aligned Visual Representations | — | 0 |
| Long-Form Video-Language Pre-Training with Multimodal Temporal Contrastive Learning | Code | 2 |
| Self-Attention Message Passing for Contrastive Few-Shot Learning | Code | 1 |
Page 176 of 267

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | — | Unverified |
| 2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | — | Unverified |
| 3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | — | Unverified |
| 4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | — | Unverified |
| 5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | — | Unverified |
| 6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | — | Unverified |
| 7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | — | Unverified |
| 8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | — | Unverified |
| 9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | — | Unverified |
| 10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | — | — | 10..5sec1 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | — | Unverified |