SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
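Many of the methods listed below optimize some variant of the InfoNCE (NT-Xent) objective: pull two augmented views of the same instance together while pushing the other instances in the batch apart. A minimal NumPy sketch (the function name and toy data are illustrative, not from any specific paper):

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.5):
    """InfoNCE / NT-Xent loss over a batch of paired embeddings.

    z_a, z_b: (batch, dim) arrays; row i of each is a positive pair
    (two views of the same instance). All other rows in the batch
    serve as negatives.
    """
    # L2-normalize so dot products become cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)

    logits = z_a @ z_b.T / temperature           # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # maximize similarity of the matched pairs (the diagonal)
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# two views of the same instances (small perturbation) -> low loss
aligned = info_nce_loss(z, z + 0.01 * rng.normal(size=z.shape))
# unrelated embeddings -> higher loss
random = info_nce_loss(z, rng.normal(size=(8, 16)))
print(aligned < random)
```

The temperature controls how sharply the softmax concentrates on the hardest negatives; SimCLR-style methods typically tune it per dataset.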

(Image credit: Schroff et al. 2015)

Papers

Showing 2751–2800 of 6661 papers

| Title | Status | Hype |
|-------|--------|------|
| Spatio-Temporal Meta Contrastive Learning | Code | 1 |
| Image Prior and Posterior Conditional Probability Representation for Efficient Damage Assessment | | 0 |
| Prototypical Contrastive Learning-based CLIP Fine-tuning for Object Re-identification | Code | 1 |
| Boosting Multi-Speaker Expressive Speech Synthesis with Semi-supervised Contrastive Learning | | 0 |
| PSP: Pre-Training and Structure Prompt Tuning for Graph Neural Networks | Code | 0 |
| SSLCL: An Efficient Model-Agnostic Supervised Contrastive Learning Framework for Emotion Recognition in Conversations | Code | 1 |
| IntenDD: A Unified Contrastive Learning Approach for Intent Detection and Discovery | | 0 |
| Proposal-Contrastive Pretraining for Object Detection from Fewer Data | | 0 |
| Learning Robust Deep Visual Representations from EEG Brain Recordings | Code | 1 |
| Model-enhanced Contrastive Reinforcement Learning for Sequential Recommendation | | 0 |
| Modality-Agnostic Self-Supervised Learning with Meta-Learned Masked Auto-Encoder | Code | 1 |
| DyExplainer: Explainable Dynamic Graph Neural Networks | | 0 |
| Unpaired MRI Super Resolution with Contrastive Learning | | 0 |
| Debiasing, calibrating, and improving Semi-supervised Learning performance via simple Ensemble Projector | | 0 |
| Length is a Curse and a Blessing for Document-level Semantics | Code | 0 |
| Generative and Contrastive Paradigms Are Complementary for Graph Self-Supervised Learning | | 0 |
| MyriadAL: Active Few Shot Learning for Histopathology | Code | 0 |
| I^2MD: 3D Action Representation Learning with Inter- and Intra-modal Mutual Distillation | | 0 |
| Contrastive Learning-based Sentence Encoders Implicitly Weight Informative Words | Code | 0 |
| CONTRASTE: Supervised Contrastive Pre-training With Aspect-based Prompts For Aspect Sentiment Triplet Extraction | Code | 1 |
| Topology-aware Debiased Self-supervised Graph Learning for Recommendation | Code | 0 |
| A Diffusion Weighted Graph Framework for New Intent Discovery | Code | 0 |
| Joint Searching and Grounding: Multi-Granularity Video Content Retrieval | Code | 0 |
| Unveiling the Power of CLIP in Unsupervised Visible-Infrared Person Re-Identification | Code | 1 |
| Remote Heart Rate Monitoring in Smart Environments from Videos with Self-supervised Pre-training | | 0 |
| MSFormer: A Skeleton-multiview Fusion Method For Tooth Instance Segmentation | | 0 |
| SAMCLR: Contrastive pre-training on complex scenes using SAM for view sampling | | 0 |
| CalibrationPhys: Self-supervised Video-based Heart and Respiratory Rate Measurements by Calibrating Between Multiple Cameras | | 0 |
| GRENADE: Graph-Centric Language Model for Self-Supervised Representation Learning on Text-Attributed Graphs | Code | 1 |
| GeoLM: Empowering Language Models for Geospatially Grounded Language Understanding | Code | 1 |
| Graph Ranking Contrastive Learning: A Extremely Simple yet Efficient Method | | 0 |
| Intent Contrastive Learning with Cross Subsequences for Sequential Recommendation | Code | 1 |
| TATA: Stance Detection via Topic-Agnostic and Topic-Aware Embeddings | Code | 0 |
| CLMSM: A Multi-Task Learning Framework for Pre-training on Procedural Text | Code | 0 |
| HEProto: A Hierarchical Enhancing ProtoNet based on Multi-Task Learning for Few-shot Named Entity Recognition | Code | 1 |
| Contrast Everything: A Hierarchical Contrastive Framework for Medical Time-Series | Code | 1 |
| Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment | Code | 0 |
| Meta-optimized Joint Generative and Contrastive Learning for Sequential Recommendation | | 0 |
| Spectral-Aware Augmentation for Enhanced Graph Representation Learning | | 0 |
| Multi-level Contrastive Learning for Script-based Character Understanding | Code | 0 |
| Coarse-to-Fine Dual Encoders are Better Frame Identification Learners | Code | 0 |
| DistillCSE: Distilled Contrastive Learning for Sentence Embeddings | Code | 0 |
| Towards Understanding How Transformers Learn In-context Through a Representation Learning Lens | | 0 |
| SILC: Improving Vision Language Pretraining with Self-Distillation | | 0 |
| Enhancing drug and cell line representations via contrastive learning for improved anti-cancer drug prioritization | | 0 |
| MTS-LOF: Medical Time-Series Representation Learning via Occlusion-Invariant Features | Code | 0 |
| MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter | Code | 1 |
| WeedCLR: Weed Contrastive Learning through Visual Representations with Class-Optimized Loss in Long-Tailed Datasets | | 0 |
| Contrastive Learning for Inference in Dialogue | Code | 0 |
| Exploiting Low-confidence Pseudo-labels for Source-free Object Detection | | 0 |
Page 56 of 134

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified |
| 2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified |
| 3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified |
| 4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified |
| 5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified |
| 6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified |
| 7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified |
| 8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified |
| 9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified |
| 10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | | | 10..5sec | 1 | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified |