SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
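The pull-together/push-apart objective is typically realized with a loss such as InfoNCE (also called NT-Xent). Below is a minimal sketch in plain Python, with toy 2-D vectors standing in for learned embeddings; the vector values and the temperature of 0.1 are illustrative choices, not from any particular paper:

```python
import math

def cosine(u, v):
    # cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    # InfoNCE: -log( exp(sim(a,p)/t) / sum_x exp(sim(a,x)/t) ),
    # where x ranges over the positive and all negatives
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    m = max(logits)  # subtract max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

# an anchor close to its positive and far from its negative yields a small loss
loss = info_nce([1.0, 0.0], [0.9, 0.1], [[0.0, 1.0]])
```

Minimizing this loss pulls the anchor toward its positive and pushes it away from the negatives, which is exactly the "similar close, dissimilar far" goal stated above.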

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.

(Image credit: Schroff et al. 2015)

Papers

Showing 1901–1925 of 6661 papers

Title | Status | Hype
Multi-Margin Cosine Loss: Proposal and Application in Recommender Systems | Code | 0
Collaborate to Adapt: Source-Free Graph Domain Adaptation via Bi-directional Adaptation | Code | 0
ActNetFormer: Transformer-ResNet Hybrid Method for Semi-Supervised Action Recognition in Videos | Code | 0
Aligning Step-by-Step Instructional Diagrams to Video Demonstrations | Code | 0
Multi-level Asymmetric Contrastive Learning for Volumetric Medical Image Segmentation Pre-training | Code | 0
Multi-Level Contrastive Learning for Dense Prediction Task | Code | 0
Augment with Care: Contrastive Learning for Combinatorial Problems | Code | 0
Multi-Label Contrastive Learning for Abstract Visual Reasoning | Code | 0
Multi-level Contrastive Learning for Script-based Character Understanding | Code | 0
Multi-Graph Co-Training for Capturing User Intent in Session-based Recommendation | Code | 0
DEDUCE: Multi-head attention decoupled contrastive learning to discover cancer subtypes based on multi-omics data | Code | 0
Aligning Motion-Blurred Images Using Contrastive Learning on Overcomplete Pixels | Code | 0
Multi-Label Contrastive Learning: A Comprehensive Study | Code | 0
Multi-level Cross-modal Feature Alignment via Contrastive Learning towards Zero-shot Classification of Remote Sensing Image Scenes | Code | 0
CODER: An efficient framework for improving retrieval through COntextual Document Embedding Reranking | Code | 0
MuDAF: Long-Context Multi-Document Attention Focusing through Contrastive Learning on Attention Heads | Code | 0
MTS-LOF: Medical Time-Series Representation Learning via Occlusion-Invariant Features | Code | 0
Multi-axis Attentive Prediction for Sparse Event Data: An Application to Crime Prediction | Code | 0
COCO-OLAC: A Benchmark for Occluded Panoptic Segmentation and Image Understanding | Code | 0
MSCDA: Multi-level Semantic-guided Contrast Improves Unsupervised Domain Adaptation for Breast MRI Segmentation in Small Datasets | Code | 0
MSA-UNet3+: Multi-Scale Attention UNet3+ with New Supervised Prototypical Contrastive Loss for Coronary DSA Image Segmentation | Code | 0
MSVQ: Self-Supervised Learning with Multiple Sample Views and Queues | Code | 0
AAG: Self-Supervised Representation Learning by Auxiliary Augmentation with GNT-Xent Loss | Code | 0
M(otion)-mode Based Prediction of Ejection Fraction using Echocardiograms | Code | 0
CochCeps-Augment: A Novel Self-Supervised Contrastive Learning Using Cochlear Cepstrum-based Masking for Speech Emotion Recognition | Code | 0
Page 77 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 10..5sec | | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified