SOTAVerified

Contrastive Learning

Contrastive learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of the data such that similar instances lie close together in the representation space, while dissimilar instances lie far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
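As a minimal illustration of the "pull positives together, push negatives apart" objective described above, the InfoNCE / NT-Xent loss used by many contrastive methods can be sketched in NumPy. This is a generic sketch, not the implementation of any specific paper listed below; the function name and batch layout are our own.

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.5):
    """InfoNCE / NT-Xent contrastive loss sketch.

    z_i, z_j: (N, D) arrays of embeddings; row k of z_i and row k of z_j
    form a positive pair, and every other row in the batch acts as a negative.
    """
    z = np.concatenate([z_i, z_j], axis=0)             # (2N, D) joint batch
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize rows
    sim = (z @ z.T) / temperature                      # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                     # mask self-similarity
    n = z_i.shape[0]
    # Index of each row's positive partner: k <-> n + k.
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Softmax cross-entropy against the positive-pair index.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return (logsumexp - sim[np.arange(2 * n), targets]).mean()
```

Intuitively, the loss is low when each embedding is more similar to its positive partner than to any negative in the batch, which is exactly the geometry described above.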

(Image credit: Schroff et al. 2015)

Papers


Title | Status | Hype
A Global and Patch-wise Contrastive Loss for Accurate Automated Exudate Detection | Code | 0
Few-Shot Point Cloud Semantic Segmentation via Contrastive Self-Supervision and Multi-Resolution Attention | | 0
Unpaired Translation from Semantic Label Maps to Images by Leveraging Domain-Specific Simulations | | 0
A General-Purpose Transferable Predictor for Neural Architecture Search | | 0
Multi-Modal Self-Supervised Learning for Recommendation | Code | 2
Mask-guided BERT for Few Shot Text Classification | | 0
Few-shot Detection of Anomalies in Industrial Cyber-Physical System via Prototypical Network and Contrastive Learning | | 0
DrasCLR: A Self-supervised Framework of Learning Disease-related and Anatomy-specific Representation for 3D Medical Images | | 0
Generalization Bounds for Adversarial Contrastive Learning | | 0
Heterogeneous Social Event Detection via Hyperbolic Graph Representations | Code | 0
Pseudo Contrastive Learning for Graph-based Semi-supervised Learning | | 0
Supervised Contrastive Learning and Feature Fusion for Improved Kinship Verification | | 0
Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE | | 0
Data-Efficient Contrastive Self-supervised Learning: Most Beneficial Examples for Supervised Learning Contribute the Least | Code | 1
EnfoMax: Domain Entropy and Mutual Information Maximization for Domain Generalized Face Anti-spoofing | | 0
Building Shortcuts between Distant Nodes with Biaffine Mapping for Graph Convolutional Networks | | 0
Like a Good Nearest Neighbor: Practical Content Moderation and Text Classification | Code | 1
Self-supervised Action Representation Learning from Partial Spatio-Temporal Skeleton Sequences | Code | 1
Bridge the Gap between Language Models and Tabular Understanding | | 0
LightGCL: Simple Yet Effective Graph Contrastive Learning for Recommendation | Code | 2
LabelPrompt: Effective Prompt-based Learning for Relation Classification | | 0
Dialogue State Distillation Network with Inter-slot Contrastive Learning for Dialogue State Tracking | | 0
CluCDD: Contrastive Dialogue Disentanglement via Clustering | Code | 1
Audio-Visual Contrastive Learning with Temporal Self-Supervision | | 0
How to Train Your DRAGON: Diverse Augmentation Towards Generalizable Dense Retrieval | | 0
Page 155 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | 10..5sec | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified