SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
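The core idea above — pull an anchor toward a similar ("positive") instance and push it away from dissimilar ("negative") instances — is commonly trained with a contrastive objective such as the InfoNCE loss. Below is a minimal plain-Python sketch of that loss for a single anchor; it is illustrative only (function names and the temperature value are our own choices, not taken from any paper listed below):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: the negative log-probability of the
    positive under a softmax over similarity scores, where the scores are
    cosine similarities divided by a temperature."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    # Numerically stable log-softmax: subtract the max before exponentiating.
    m = max(logits)
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))
```

The loss is near zero when the anchor is much more similar to the positive than to every negative, and grows as negatives become as similar as (or more similar than) the positive — which is exactly the geometry the first paragraph describes.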

(Image credit: Schroff et al. 2015)

Papers

Showing 2201-2225 of 6661 papers

Title | Status | Hype
Mask-informed Deep Contrastive Incomplete Multi-view Clustering | Code | 0
Cross-Task Consistency Learning Framework for Multi-Task Learning | Code | 0
Masking Improves Contrastive Self-Supervised Learning for ConvNets, and Saliency Tells You Where | Code | 0
Medication Recommendation via Dual Molecular Modalities and Multi-Step Enhancement | Code | 0
CHMATCH: Contrastive Hierarchical Matching and Robust Adaptive Threshold Boosted Semi-Supervised Learning | Code | 0
A Self-Supervised Model for Multi-modal Stroke Risk Prediction | Code | 0
AFiRe: Anatomy-Driven Self-Supervised Learning for Fine-Grained Representation in Radiographic Images | Code | 0
Masked Collaborative Contrast for Weakly Supervised Semantic Segmentation | Code | 0
Masked Student Dataset of Expressions | Code | 0
Cross-Model Cross-Stream Learning for Self-Supervised Human Action Recognition | Code | 0
Cross-Modal Self-Supervised Learning with Effective Contrastive Units for LiDAR Point Clouds | Code | 0
Affinity Uncertainty-based Hard Negative Mining in Graph Contrastive Learning | Code | 0
Cross-modal Contrastive Learning with Asymmetric Co-attention Network for Video Moment Retrieval | Code | 0
Chasing Fairness in Graphs: A GNN Architecture Perspective | Code | 0
MAPS: Motivation-Aware Personalized Search via LLM-Driven Consultation Alignment | Code | 0
Cross-Modal Contrastive Learning for Robust Reasoning in VQA | Code | 0
Crossmodal clustered contrastive learning: Grounding of spoken language to gesture | Code | 0
Mao-Zedong At SemEval-2023 Task 4: Label Represention Multi-Head Attention Model With Contrastive Learning-Enhanced Nearest Neighbor Mechanism For Multi-Label Text Classification | Code | 0
MAPConNet: Self-supervised 3D Pose Transfer with Mesh and Point Contrastive Learning | Code | 0
CrossMoCo: Multi-modal Momentum Contrastive Learning for Point Cloud | Code | 0
Manifold Contrastive Learning with Variational Lie Group Operators | Code | 0
Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning | Code | 0
Channel-aware Contrastive Conditional Diffusion for Multivariate Probabilistic Time Series Forecasting | Code | 0
Making the Most of Text Semantics to Improve Biomedical Vision--Language Processing | Code | 0
Cross-Lingual Contrastive Learning for Fine-Grained Entity Typing for Low-Resource Languages | Code | 0
Page 89 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
110..5sec1Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified