SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
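The "close together / far apart" objective can be made concrete with a margin-based triplet loss, the formulation popularized by Schroff et al. 2015 (credited below). This is a minimal NumPy sketch, not any particular paper's implementation; the margin value is illustrative:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull the positive embedding toward the anchor; push the
    negative at least `margin` farther away (squared L2 distance)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)

# Toy 2-D embeddings: the positive is already much closer than the
# negative, so the hinge is inactive and the loss is zero.
a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])   # similar instance: near the anchor
n = np.array([-1.0, 0.0])  # dissimilar instance: far from the anchor
print(triplet_loss(a, p, n))  # → 0.0
```

Swapping the positive and negative produces a positive loss, which is the gradient signal that pulls similar instances together during training.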

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
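For the retrieval tasks mentioned above, the learned embeddings are typically compared by cosine similarity. A minimal sketch with hypothetical 2-D embeddings (real models produce hundreds of dimensions):

```python
import numpy as np

def retrieve(query, gallery, k=2):
    """Return indices of the k gallery embeddings most similar to the
    query, ranked by cosine similarity (highest first)."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity per gallery row
    return np.argsort(-sims)[:k]

# Hypothetical embeddings: rows 0 and 1 depict the same class as the
# query, row 2 is a different class.
gallery = np.array([[1.0, 0.1], [0.9, 0.2], [-1.0, 0.3]])
query = np.array([1.0, 0.0])
print(retrieve(query, gallery))  # → [0 1]
```

The same embeddings can be fed to any off-the-shelf classifier or clustering algorithm for the downstream tasks noted above.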

(Image credit: Schroff et al. 2015)

Papers

Showing 1051–1075 of 6661 papers

Title | Status | Hype
Contrastive Cross-domain Recommendation in Matching | Code | 1
ALSO: Automotive Lidar Self-supervision by Occupancy estimation | Code | 1
FLIP: Cross-domain Face Anti-spoofing with Language Guidance | Code | 1
Contrastive Denoising Score for Text-guided Latent Diffusion Image Editing | Code | 1
Contrastive Deep Nonnegative Matrix Factorization for Community Detection | Code | 1
Contrastive Identity-Aware Learning for Multi-Agent Value Decomposition | Code | 1
FocusFace: Multi-task Contrastive Learning for Masked Face Recognition | Code | 1
Frequency-Masked Embedding Inference: A Non-Contrastive Approach for Time Series Representation Learning | Code | 1
Multi-modal vision-language model for generalizable annotation-free pathology localization and clinical diagnosis | Code | 1
Contrastive Clustering | Code | 1
Contrastive Code Representation Learning | Code | 1
Fine-grained Angular Contrastive Learning with Coarse Labels | Code | 1
Contrastive Collaborative Filtering for Cold-Start Item Recommendation | Code | 1
Contrastive Bayesian Analysis for Deep Metric Learning | Code | 1
Finding Order in Chaos: A Novel Data Augmentation Method for Time Series in Contrastive Learning | Code | 1
Fine-grained Category Discovery under Coarse-grained supervision with Hierarchical Weighted Self-contrastive Learning | Code | 1
Contrasting with Symile: Simple Model-Agnostic Representation Learning for Unlimited Modalities | Code | 1
FiGURe: Simple and Efficient Unsupervised Node Representations with Filter Augmentations | Code | 1
Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training | Code | 1
ConGraT: Self-Supervised Contrastive Pretraining for Joint Graph and Text Embeddings | Code | 1
Few-shot Action Recognition with Prototype-centered Attentive Learning | Code | 1
Few-Shot Intent Detection via Contrastive Pre-Training and Fine-Tuning | Code | 1
Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Compositional Understanding | Code | 1
Finding Meaning in Points: Weakly Supervised Semantic Segmentation for Event Cameras | Code | 1
Fine-grained Temporal Contrastive Learning for Weakly-supervised Temporal Action Localization | Code | 1
Page 43 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | | 10..5sec | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified