SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
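The "pull similar pairs together, push dissimilar pairs apart" objective is commonly implemented as an InfoNCE-style loss. The following is a minimal pure-Python sketch (the function names and toy vectors are illustrative, not from any particular paper): the loss is the negative log-probability of the positive candidate under a softmax over cosine similarities.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: -log softmax of the positive's
    similarity over the positive plus all negatives. Lower loss means
    the anchor is closer to its positive than to the negatives."""
    sims = [cosine_sim(anchor, positive)]
    sims += [cosine_sim(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    # Numerically stable log-sum-exp over all candidates.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]
```

As a sanity check, an anchor whose positive is nearby and whose negative is orthogonal incurs a much smaller loss than the reverse arrangement, which is exactly the gradient signal that pulls representations of similar instances together.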

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
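To make the "features for downstream tasks" point concrete, here is a toy sketch of classifying with frozen embeddings via a nearest-centroid rule (all data and names here are hypothetical; in practice the embeddings would come from a trained encoder and one would typically fit a linear probe instead):

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid(embedding, class_embeddings):
    """Assign `embedding` to the class whose centroid is closest
    in squared Euclidean distance."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centroids = {label: centroid(vs) for label, vs in class_embeddings.items()}
    return min(centroids, key=lambda label: sq_dist(embedding, centroids[label]))
```

If contrastive pretraining has done its job, instances of the same class cluster in embedding space, so even this crude rule separates them, e.g. `nearest_centroid([0.9, 0.1], {"cat": [[1.0, 0.0], [0.8, 0.2]], "dog": [[0.0, 1.0], [0.1, 0.9]]})` returns `"cat"`.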

(Image credit: Schroff et al. 2015)

Papers

Showing 1001–1050 of 6661 papers

Title | Status | Hype
Aligning Pretraining for Detection via Object-Level Contrastive Learning | Code | 1
CoIn: Contrastive Instance Feature Mining for Outdoor 3D Object Detection with Very Limited Annotations | Code | 1
Contrastive Continual Learning with Importance Sampling and Prototype-Instance Relation Distillation | Code | 1
Fine-grained Category Discovery under Coarse-grained supervision with Hierarchical Weighted Self-contrastive Learning | Code | 1
Long-tail Augmented Graph Contrastive Learning for Recommendation | Code | 1
Collaborating Domain-shared and Target-specific Feature Clustering for Cross-domain 3D Action Recognition | Code | 1
Finding Order in Chaos: A Novel Data Augmentation Method for Time Series in Contrastive Learning | Code | 1
Energy-Based Contrastive Learning of Visual Representations | Code | 1
A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning | Code | 1
Enhanced Seq2Seq Autoencoder via Contrastive Learning for Abstractive Text Summarization | Code | 1
AASAE: Augmentation-Augmented Stochastic Autoencoders | Code | 1
ReMeDi: Resources for Multi-domain, Multi-service, Medical Dialogues | Code | 1
Contrastive Deep Supervision | Code | 1
English Contrastive Learning Can Learn Universal Cross-lingual Sentence Embeddings | Code | 1
Fine-grained Angular Contrastive Learning with Coarse Labels | Code | 1
MAKE: Multi-Aspect Knowledge-Enhanced Vision-Language Pretraining for Zero-shot Dermatological Assessment | Code | 1
Fine-grained Temporal Contrastive Learning for Weakly-supervised Temporal Action Localization | Code | 1
Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization | Code | 1
CoMAE: Single Model Hybrid Pre-training on Small-Scale RGB-D Datasets | Code | 1
CoMatch: Semi-supervised Learning with Contrastive Graph Regularization | Code | 1
Enhancing Dysarthric Speech Recognition for Unseen Speakers via Prototype-Based Adaptation | Code | 1
Margin Preserving Self-paced Contrastive Learning Towards Domain Adaptation for Medical Image Segmentation | Code | 1
FLIP: Cross-domain Face Anti-spoofing with Language Guidance | Code | 1
Contrastive Clustering | Code | 1
A Message Passing Perspective on Learning Dynamics of Contrastive Learning | Code | 1
FiGURe: Simple and Efficient Unsupervised Node Representations with Filter Augmentations | Code | 1
Few-Shot Intent Detection via Contrastive Pre-Training and Fine-Tuning | Code | 1
Contrastive Code Representation Learning | Code | 1
Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training | Code | 1
Contrastive Collaborative Filtering for Cold-Start Item Recommendation | Code | 1
Balanced Contrastive Learning for Long-Tailed Visual Recognition | Code | 1
Contrastive Denoising Score for Text-guided Latent Diffusion Image Editing | Code | 1
A Brain Graph Foundation Model: Pre-Training and Prompt-Tuning for Any Atlas and Disorder | Code | 1
Contrastive Bayesian Analysis for Deep Metric Learning | Code | 1
Few-shot Action Recognition with Prototype-centered Attentive Learning | Code | 1
Finding Meaning in Points: Weakly Supervised Semantic Segmentation for Event Cameras | Code | 1
Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Compositional Understanding | Code | 1
Bag of Instances Aggregation Boosts Self-supervised Distillation | Code | 1
FedIIC: Towards Robust Federated Learning for Class-Imbalanced Medical Image Classification | Code | 1
BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label | Code | 1
BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning | Code | 1
FedHCDR: Federated Cross-Domain Recommendation with Hypergraph Signal Decoupling | Code | 1
CONTRASTE: Supervised Contrastive Pre-training With Aspect-based Prompts For Aspect Sentiment Triplet Extraction | Code | 1
Contrast Everything: A Hierarchical Contrastive Framework for Medical Time-Series | Code | 1
Contrasting with Symile: Simple Model-Agnostic Representation Learning for Unlimited Modalities | Code | 1
ContrastCAD: Contrastive Learning-based Representation Learning for Computer-Aided Design Models | Code | 1
FLIP: Fine-grained Alignment between ID-based Models and Pretrained Language Models for CTR Prediction | Code | 1
FedACK: Federated Adversarial Contrastive Knowledge Distillation for Cross-Lingual and Cross-Model Social Bot Detection | Code | 1
FedX: Unsupervised Federated Learning with Cross Knowledge Distillation | Code | 1
Adaptive Graph Contrastive Learning for Recommendation | Code | 1
Page 21 of 134

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
110..5sec1Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified