SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
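To make the "pull positives together, push negatives apart" objective concrete, below is a minimal sketch of a SimCLR-style contrastive (NT-Xent / InfoNCE) loss in PyTorch. The names and shapes are illustrative assumptions, not any specific paper's implementation: z_i and z_j are taken to hold the embeddings of two augmented views of the same batch of examples.

```python
# Minimal NT-Xent / InfoNCE sketch (illustrative, not a reference implementation).
import torch
import torch.nn.functional as F

def nt_xent_loss(z_i: torch.Tensor, z_j: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Pull each example's two views together; treat every other embedding
    in the batch as a negative."""
    n = z_i.size(0)
    z = F.normalize(torch.cat([z_i, z_j], dim=0), dim=1)   # (2n, d), unit length
    sim = z @ z.t() / temperature                           # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                       # an embedding is never its own positive
    # The positive for row k is row k + n (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(sim.device)
    return F.cross_entropy(sim, targets)

# Example with random embeddings standing in for an encoder's outputs:
z_i, z_j = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent_loss(z_i, z_j)
```

The temperature controls how sharply the softmax concentrates on the hardest negatives; smaller values penalize near-miss negatives more strongly.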

It has proven effective in a range of computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. The learned representations can also be reused as fixed features for downstream tasks such as classification and clustering.
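A common way to reuse the frozen representations is a linear probe: extract embeddings once, then fit a simple classifier on top. The sketch below assumes a hypothetical pretrained `encoder` and standard PyTorch data loaders; it is an illustration of the workflow, not any particular paper's evaluation protocol.

```python
# Linear-probe sketch on top of a (hypothetical) contrastively trained encoder.
import torch
from sklearn.linear_model import LogisticRegression

@torch.no_grad()
def extract_features(encoder, loader, device="cpu"):
    """Run the frozen encoder over a loader and collect embeddings + labels."""
    encoder.eval()
    feats, labels = [], []
    for x, y in loader:
        feats.append(encoder(x.to(device)).cpu())
        labels.append(y)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

# Usage (encoder, train_loader, test_loader are placeholders):
# train_x, train_y = extract_features(encoder, train_loader)
# test_x, test_y = extract_features(encoder, test_loader)
# probe = LogisticRegression(max_iter=1000).fit(train_x, train_y)
# print("Linear-probe accuracy:", probe.score(test_x, test_y))
```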

(Image credit: Schroff et al. 2015)

Papers

Showing 1026–1050 of 6661 papers

Title | Status | Hype
FiGURe: Simple and Efficient Unsupervised Node Representations with Filter Augmentations | Code | 1
Few-Shot Intent Detection via Contrastive Pre-Training and Fine-Tuning | Code | 1
Contrastive Code Representation Learning | Code | 1
Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training | Code | 1
Contrastive Collaborative Filtering for Cold-Start Item Recommendation | Code | 1
Balanced Contrastive Learning for Long-Tailed Visual Recognition | Code | 1
Contrastive Denoising Score for Text-guided Latent Diffusion Image Editing | Code | 1
A Brain Graph Foundation Model: Pre-Training and Prompt-Tuning for Any Atlas and Disorder | Code | 1
Contrastive Bayesian Analysis for Deep Metric Learning | Code | 1
Few-shot Action Recognition with Prototype-centered Attentive Learning | Code | 1
Finding Meaning in Points: Weakly Supervised Semantic Segmentation for Event Cameras | Code | 1
Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Compositional Understanding | Code | 1
Bag of Instances Aggregation Boosts Self-supervised Distillation | Code | 1
FedIIC: Towards Robust Federated Learning for Class-Imbalanced Medical Image Classification | Code | 1
BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label | Code | 1
BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning | Code | 1
FedHCDR: Federated Cross-Domain Recommendation with Hypergraph Signal Decoupling | Code | 1
CONTRASTE: Supervised Contrastive Pre-training With Aspect-based Prompts For Aspect Sentiment Triplet Extraction | Code | 1
Contrast Everything: A Hierarchical Contrastive Framework for Medical Time-Series | Code | 1
Contrasting with Symile: Simple Model-Agnostic Representation Learning for Unlimited Modalities | Code | 1
ContrastCAD: Contrastive Learning-based Representation Learning for Computer-Aided Design Models | Code | 1
FLIP: Fine-grained Alignment between ID-based Models and Pretrained Language Models for CTR Prediction | Code | 1
FedACK: Federated Adversarial Contrastive Knowledge Distillation for Cross-Lingual and Cross-Model Social Bot Detection | Code | 1
FedX: Unsupervised Federated Learning with Cross Knowledge Distillation | Code | 1
Adaptive Graph Contrastive Learning for Recommendation | Code | 1
Page 42 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | | 10..5 sec | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified