Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
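A common concrete instantiation of this objective is the InfoNCE (NT-Xent) loss used by SimCLR-style methods. The PyTorch sketch below is illustrative, not taken from any paper listed on this page: each input contributes two augmented views that form a positive pair, and every other sample in the batch acts as a negative.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent loss over a batch of positive pairs (z1[i], z2[i]).

    z1, z2: [N, D] embeddings of two augmented views of the same N inputs.
    All other samples in the combined 2N batch serve as negatives.
    """
    z1 = F.normalize(z1, dim=1)          # unit norm, so dot product = cosine similarity
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)       # [2N, D]
    sim = z @ z.t() / temperature        # [2N, 2N] pairwise similarities
    sim.fill_diagonal_(float("-inf"))    # a sample is never its own candidate
    n = z1.size(0)
    # The positive for row i is the other augmented view of the same input.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage: random tensors standing in for an encoder's outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = info_nce_loss(z1, z2)
```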

Contrastive methods have proven effective across a range of computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. The learned representations can then serve as features for downstream tasks such as classification and clustering, for example via a linear probe, as sketched below.
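A linear probe freezes the pretrained encoder and trains only a linear classifier on top of its features; this is the standard protocol behind ImageNet Top-1 numbers like those in the benchmark tables further down. A minimal sketch, assuming a pretrained `encoder` module and an `(images, labels)` data loader, both hypothetical here:

```python
import torch
import torch.nn as nn

def linear_probe(encoder: nn.Module, feat_dim: int, num_classes: int,
                 loader, epochs: int = 10, lr: float = 1e-3) -> nn.Linear:
    """Train a linear classifier on frozen features (hypothetical helper).

    encoder: pretrained contrastive model, used only as a feature extractor.
    loader:  iterable of (images, labels) batches.
    """
    encoder.eval()                        # freeze the backbone
    for p in encoder.parameters():
        p.requires_grad_(False)
    clf = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                feats = encoder(x)        # fixed representations
            loss = loss_fn(clf(feats), y) # only the linear head is trained
            opt.zero_grad()
            loss.backward()
            opt.step()
    return clf
```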

(Image credit: Schroff et al. 2015)
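The credited figure is FaceNet's illustration of the triplet loss (Schroff et al., 2015), an early and influential contrastive objective: an anchor is pulled toward a positive of the same identity and pushed away from a negative until a margin separates them. A minimal sketch with illustrative variable names:

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor: torch.Tensor, positive: torch.Tensor,
                 negative: torch.Tensor, margin: float = 0.2) -> torch.Tensor:
    """Triplet loss in the style of FaceNet (Schroff et al., 2015).

    Penalizes triplets where the anchor-negative distance does not
    exceed the anchor-positive distance by at least `margin`.
    """
    d_pos = (anchor - positive).pow(2).sum(dim=1)  # squared L2 distances
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage on unit-normalized random embeddings.
# PyTorch also ships a built-in version: torch.nn.TripletMarginLoss.
a, p, n = (F.normalize(torch.randn(4, 64), dim=1) for _ in range(3))
loss = triplet_loss(a, p, n)
```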

Papers

Showing 2301–2325 of 6661 papers

Title | Status | Hype
MarsEclipse at SemEval-2023 Task 3: Multi-Lingual and Multi-Label Framing Detection with Contrastive Learning | Code | 0
Adversarial Learning Data Augmentation for Graph Contrastive Learning in Recommendation | Code | 0
Caption Feature Space Regularization for Audio Captioning | Code | 0
Mao-Zedong At SemEval-2023 Task 4: Label Represention Multi-Head Attention Model With Contrastive Learning-Enhanced Nearest Neighbor Mechanism For Multi-Label Text Classification | Code | 0
Manifold Contrastive Learning with Variational Lie Group Operators | Code | 0
Adversarial Graph Contrastive Learning with Information Regularization | Code | 0
Making Pre-trained Language Models Better Continual Few-Shot Relation Extractors | Code | 0
Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning | Code | 0
Contrastive Visual-Linguistic Pretraining | Code | 0
Making the Most of Text Semantics to Improve Biomedical Vision–Language Processing | Code | 0
ManiNeg: Manifestation-guided Multimodal Pretraining for Mammography Classification | Code | 0
MA-AVT: Modality Alignment for Parameter-Efficient Audio-Visual Transformers | Code | 0
Single-Pass Contrastive Learning Can Work for Both Homophilic and Heterophilic Graph | Code | 0
Can Self-Supervised Representation Learning Methods Withstand Distribution Shifts and Corruptions? | Code | 0
M3ANet: Multi-scale and Multi-Modal Alignment Network for Brain-Assisted Target Speaker Extraction | Code | 0
Contrastive Variational Autoencoder Enhances Salient Features | Code | 0
UoR-NCL at SemEval-2025 Task 1: Using Generative LLMs and CLIP Models for Multilingual Multimodal Idiomaticity Representation | Code | 0
Machine Unlearning in Hyperbolic vs. Euclidean Multimodal Contrastive Learning: Adapting Alignment Calibration to MERU | Code | 0
Can Machines Resonate with Humans? Evaluating the Emotional and Empathic Comprehension of LMs | Code | 0
Contrastive Transformer Learning with Proximity Data Generation for Text-Based Person Search | Code | 0
A Contrastive Learning Scheme with Transformer Innate Patches | Code | 0
Contrastive Training of Complex-Valued Autoencoders for Object Discovery | Code | 0
Approximate Bijective Correspondence for isolating factors of variation | Code | 0
M3: A Multi-Task Mixed-Objective Learning Framework for Open-Domain Multi-Hop Dense Sentence Retrieval | Code | 0
Looking Beyond Corners: Contrastive Learning of Visual Representations for Keypoint Detection and Description Extraction | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | - | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | - | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | - | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | - | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | - | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | - | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | - | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | - | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | - | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | - | 0..5sec | 1 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | - | Unverified