SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn an embedding of the data in which similar instances lie close together in the representation space, while dissimilar instances lie far apart.
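A common way to realize this objective is the InfoNCE (NT-Xent) loss used by methods such as SimCLR: each anchor is classified against its positive among the other samples in the batch, which act as negatives. The sketch below is a minimal NumPy illustration; the function name, temperature value, and toy batch are illustrative, not taken from any specific paper.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE: pull each anchor toward its positive, push it away
    from every other sample in the batch (in-batch negatives)."""
    # L2-normalize so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature  # (N, N) similarity matrix
    # Row i's true positive sits on the diagonal; cross-entropy over rows
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

With well-aligned pairs (positives close to their anchors) the loss is small; with random pairings it approaches log N for a batch of N samples.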

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.

(Image credit: Schroff et al. 2015)
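The credited figure is from Schroff et al. (2015), which popularized the triplet loss: an anchor is pulled toward a positive of the same identity and pushed at least a margin away from a negative. A minimal NumPy sketch follows; the function name and margin value are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss: squared anchor-positive distance should be
    smaller than the anchor-negative distance by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)  # anchor↔positive
    d_neg = np.sum((anchor - negative) ** 2, axis=1)  # anchor↔negative
    return np.mean(np.maximum(d_pos - d_neg + margin, 0.0))
```

When the positive already sits much closer than the negative, the hinge is inactive and the loss is zero; violating triplets contribute linearly to the loss.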

Papers

Showing 601–650 of 6661 papers

Title | Status | Hype
Contrastive Deep Nonnegative Matrix Factorization for Community Detection | Code | 1
FaMeSumm: Investigating and Improving Faithfulness of Medical Summarization | Code | 1
Sculpting Holistic 3D Representation in Contrastive Language-Image-3D Pre-training | Code | 1
Cross-Modal Information-Guided Network using Contrastive Learning for Point Cloud Registration | Code | 1
CROMA: Remote Sensing Representations with Contrastive Radar-Optical Masked Autoencoders | Code | 1
TPSeNCE: Towards Artifact-Free Realistic Rain Generation for Deraining and Object Detection in Rain | Code | 1
REBAR: Retrieval-Based Reconstruction for Time-series Contrastive Learning | Code | 1
BasisFormer: Attention-based Time Series Forecasting with Learnable and Interpretable Basis | Code | 1
SimMMDG: A Simple and Effective Framework for Multi-modal Domain Generalization | Code | 1
FLIP: Fine-grained Alignment between ID-based Models and Pretrained Language Models for CTR Prediction | Code | 1
FOCAL: Contrastive Learning for Multimodal Time-Series Sensing Signals in Factorized Orthogonal Latent Space | Code | 1
Adversarial Examples Are Not Real Features | Code | 1
Simple and Asymmetric Graph Contrastive Learning without Augmentations | Code | 1
BirdSAT: Cross-View Contrastive Masked Autoencoders for Bird Species Classification and Mapping | Code | 1
Empowering Collaborative Filtering with Principled Adversarial Contrastive Loss | Code | 1
Leveraging Multimodal Features and Item-level User Feedback for Bundle Construction | Code | 1
Spatio-Temporal Meta Contrastive Learning | Code | 1
Prototypical Contrastive Learning-based CLIP Fine-tuning for Object Re-identification | Code | 1
SSLCL: An Efficient Model-Agnostic Supervised Contrastive Learning Framework for Emotion Recognition in Conversations | Code | 1
Modality-Agnostic Self-Supervised Learning with Meta-Learned Masked Auto-Encoder | Code | 1
Learning Robust Deep Visual Representations from EEG Brain Recordings | Code | 1
CONTRASTE: Supervised Contrastive Pre-training With Aspect-based Prompts For Aspect Sentiment Triplet Extraction | Code | 1
GeoLM: Empowering Language Models for Geospatially Grounded Language Understanding | Code | 1
Unveiling the Power of CLIP in Unsupervised Visible-Infrared Person Re-Identification | Code | 1
GRENADE: Graph-Centric Language Model for Self-Supervised Representation Learning on Text-Attributed Graphs | Code | 1
Intent Contrastive Learning with Cross Subsequences for Sequential Recommendation | Code | 1
HEProto: A Hierarchical Enhancing ProtoNet based on Multi-Task Learning for Few-shot Named Entity Recognition | Code | 1
Contrast Everything: A Hierarchical Contrastive Framework for Medical Time-Series | Code | 1
MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter | Code | 1
PREM: A Simple Yet Effective Approach for Node-Level Graph Anomaly Detection | Code | 1
CLARA: Multilingual Contrastive Learning for Audio Representation Acquisition | Code | 1
SimCKP: Simple Contrastive Learning of Keyphrase Representations | Code | 1
Enhancing Text-based Knowledge Graph Completion with Zero-Shot Large Language Models: A Focus on Semantic Enhancement | Code | 1
Rethinking Negative Pairs in Code Search | Code | 1
Language Models As Semantic Indexers | Code | 1
DrugCLIP: Contrastive Protein-Molecule Representation Learning for Virtual Screening | Code | 1
InfoCL: Alleviating Catastrophic Forgetting in Continual Text Classification from An Information Theoretic Perspective | Code | 1
Aligning Language Models with Human Preferences via a Bayesian Approach | Code | 1
WeatherDepth: Curriculum Contrastive Learning for Self-Supervised Depth Estimation under Adverse Weather Conditions | Code | 1
Instances and Labels: Hierarchy-aware Joint Supervised Contrastive Learning for Hierarchical Multi-Label Text Classification | Code | 1
Degradation-Aware Self-Attention Based Transformer for Blind Image Super-Resolution | Code | 1
Certifiably Robust Graph Contrastive Learning | Code | 1
Fragment-based Pretraining and Finetuning on Molecular Graphs | Code | 1
AstroCLIP: A Cross-Modal Foundation Model for Galaxies | Code | 1
SNIP: Bridging Mathematical Symbolic and Numeric Realms with Unified Pre-training | Code | 1
FiGURe: Simple and Efficient Unsupervised Node Representations with Filter Augmentations | Code | 1
Towards Distribution-Agnostic Generalized Category Discovery | Code | 1
Segment Anything Model is a Good Teacher for Local Feature Learning | Code | 1
Information Flow in Self-Supervised Learning | Code | 1
Beyond Co-occurrence: Multi-modal Session-based Recommendation | Code | 1
Page 13 of 134

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | 10..5sec | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified