SOTAVerified

Contrastive Learning

Contrastive learning is a deep learning technique for unsupervised representation learning. The goal is to learn an embedding of the data such that similar instances lie close together in the representation space while dissimilar instances lie far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
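The "similar close, dissimilar far" objective is commonly implemented with the InfoNCE (NT-Xent) loss over batches of augmented positive pairs. Below is a minimal NumPy sketch of that loss; the function name, batch layout, and temperature value are illustrative assumptions, not taken from any particular paper listed here:

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.5):
    """InfoNCE / NT-Xent loss for a batch of positive pairs.

    z_i, z_j: (N, D) embeddings; row k of z_i and row k of z_j are
    two views (e.g. augmentations) of the same instance.  All other
    rows in the batch act as negatives.
    """
    z = np.concatenate([z_i, z_j], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalise rows
    sim = z @ z.T / temperature                          # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                       # exclude self-similarity
    n = len(z_i)
    # the positive for row k is row k+n, and vice versa
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Minimising this loss pulls each positive pair together while pushing it away from every other instance in the batch; the temperature controls how sharply hard negatives are weighted.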

(Image credit: Schroff et al. 2015)

Papers

Showing 901–925 of 6661 papers

| Title | Status | Hype |
| --- | --- | --- |
| CLIP-Event: Connecting Text and Images with Event Structures | Code | 1 |
| Asymmetric Patch Sampling for Contrastive Learning | Code | 1 |
| Contrastive Cross-domain Recommendation in Matching | Code | 1 |
| Conditioned and Composed Image Retrieval Combining and Partially Fine-Tuning CLIP-Based Features | Code | 1 |
| CLIP-KD: An Empirical Study of CLIP Model Distillation | Code | 1 |
| CLIP-Lite: Information Efficient Visual Representation Learning with Language Supervision | Code | 1 |
| CLIPLoss and Norm-Based Data Selection Methods for Multimodal Contrastive Learning | Code | 1 |
| DetCo: Unsupervised Contrastive Learning for Object Detection | Code | 1 |
| A Molecular Multimodal Foundation Model Associating Molecule Graphs with Natural Language | Code | 1 |
| Consistent Explanations by Contrastive Learning | Code | 1 |
| Enhanced Seq2Seq Autoencoder via Contrastive Learning for Abstractive Text Summarization | Code | 1 |
| DialogueCSE: Dialogue-based Contrastive Learning of Sentence Embeddings | Code | 1 |
| Detect Rumors in Microblog Posts for Low-Resource Domains via Adversarial Contrastive Learning | Code | 1 |
| DFIL: Deepfake Incremental Learning by Exploiting Domain-invariant Forgery Clues | Code | 1 |
| DICNet: Deep Instance-Level Contrastive Network for Double Incomplete Multi-View Multi-Label Classification | Code | 1 |
| Enhancing Modal Fusion by Alignment and Label Matching for Multimodal Emotion Recognition | Code | 1 |
| Lambda: Learning Matchable Prior For Entity Alignment with Unlabeled Dangling Cases | Code | 1 |
| Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations | Code | 1 |
| CLMLF:A Contrastive Learning and Multi-Layer Fusion Method for Multimodal Sentiment Detection | Code | 1 |
| Diffusion-based Contrastive Learning for Sequential Recommendation | Code | 1 |
| CL-MVSNet: Unsupervised Multi-View Stereo with Dual-Level Contrastive Learning | Code | 1 |
| Diffusion-Driven Data Replay: A Novel Approach to Combat Forgetting in Federated Class Continual Learning | Code | 1 |
| Extending global-local view alignment for self-supervised learning with remote sensing imagery | Code | 1 |
| CLOCS: Contrastive Learning of Cardiac Signals Across Space, Time, and Patients | Code | 1 |
| Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | Code | 1 |
Page 37 of 267

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified |
| 2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified |
| 3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified |
| 4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified |
| 5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified |
| 6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified |
| 7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified |
| 8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified |
| 9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified |
| 10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | 10..5sec | 1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified |