SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn representations of data such that similar instances lie close together in the representation space, while dissimilar instances lie far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
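In practice, pulling similar instances together and pushing dissimilar ones apart is commonly done with an InfoNCE-style loss over a batch of paired augmented views (as in SimCLR-type methods). A minimal NumPy sketch, where the function name, shapes, and temperature value are illustrative assumptions rather than any specific paper's implementation:

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.5):
    """InfoNCE-style contrastive loss (illustrative sketch).

    z_i, z_j: (N, D) arrays of embeddings for two augmented views of the
    same N instances. Row k of z_i and row k of z_j form a positive pair;
    every other embedding in the batch acts as a negative.
    """
    # L2-normalize so the dot product is cosine similarity
    z_i = z_i / np.linalg.norm(z_i, axis=1, keepdims=True)
    z_j = z_j / np.linalg.norm(z_j, axis=1, keepdims=True)
    z = np.concatenate([z_i, z_j], axis=0)      # (2N, D)
    sim = z @ z.T / temperature                 # (2N, 2N) similarity matrix
    np.fill_diagonal(sim, -np.inf)              # exclude self-similarity
    n = len(z_i)
    # the positive for row k is row k+n (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of each row against its positive index
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

With well-aligned pairs (two nearby views of the same instance) the loss is lower than with mismatched pairs, which is what training exploits to shape the representation space.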

(Image credit: Schroff et al. 2015)

Papers

Showing papers 4851–4875 of 6661

Title | Status | Hype
Words are all you need? Language as an approximation for human similarity judgments | — | 0
CO^3: Cooperative Unsupervised 3D Representation Learning for Autonomous Driving | Code | 1
ConFUDA: Contrastive Fewshot Unsupervised Domain Adaptation for Medical Image Segmentation | — | 0
Mixed Graph Contrastive Network for Semi-Supervised Node Classification | — | 0
Contrastive Graph Multimodal Model for Text Classification in Videos | — | 0
Improving Contrastive Learning of Sentence Embeddings with Case-Augmented Positives and Retrieved Negatives | Code | 1
Bootstrapping Semi-supervised Medical Image Segmentation with Anatomical-aware Contrastive Distillation | Code | 1
Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts | — | 0
Consensus Learning for Cooperative Multi-Agent Reinforcement Learning | — | 0
Semi-Supervised Learning for Mars Imagery Classification and Segmentation | — | 0
From t-SNE to UMAP with contrastive learning | Code | 1
Integrating Prior Knowledge in Contrastive Learning with Kernel | Code | 0
Rethinking and Scaling Up Graph Contrastive Learning: An Extremely Efficient Approach with Group Discrimination | Code | 1
Egocentric Video-Language Pretraining | Code | 2
3D-Augmented Contrastive Knowledge Distillation for Image-based Object Pose Estimation | — | 0
Understanding the Role of Nonlinearity in Training Dynamics of Contrastive Learning | — | 0
Hyperspherical Consistency Regularization | Code | 1
Prefix Conditioning Unifies Language and Label Supervision | — | 0
Hard Negative Sampling Strategies for Contrastive Representation Learning | — | 0
Cross-lingual and Multilingual CLIP | Code | 2
Mitigating Dataset Artifacts in Natural Language Inference Through Automatic Contextual Data Augmentation and Learning Optimization | — | 0
Positive Unlabeled Contrastive Learning | — | 0
Multi-scale frequency separation network for image deblurring | — | 0
Strongly Augmented Contrastive Clustering | Code | 1
Augmentation Component Analysis: Modeling Similarity via the Augmentation Overlaps | Code | 0
Page 195 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | — | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | — | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | — | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | — | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | — | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | — | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | — | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | — | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | — | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 10..5sec | — | 1 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | — | Unverified