SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
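The "pull similar pairs together, push dissimilar pairs apart" objective is most often implemented as the InfoNCE (NT-Xent) loss. Below is a minimal dependency-free sketch for a single anchor; the helper names (`cosine`, `info_nce`) and the toy vectors are illustrative, not from any specific paper on this page.

```python
import math

def cosine(u, v):
    # cosine similarity between two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: negative log-softmax of the
    positive's similarity against the positive plus all negatives."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # numerically stable log-sum-exp
    log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_sum - logits[0]

anchor    = [1.0, 0.0]
positive  = [0.9, 0.1]             # a genuinely similar view
negatives = [[-1.0, 0.0], [0.0, 1.0]]

loss_good = info_nce(anchor, positive, negatives)          # small: positive is close
loss_bad  = info_nce(anchor, negatives[0], [positive, negatives[1]])  # large: "positive" is far
```

Minimizing this loss drives the anchor toward its positive and away from the negatives, which is exactly the geometric goal described above.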

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
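Once trained, the encoder is typically frozen and its embeddings reused directly. A minimal sketch of embedding-based retrieval, one of the downstream tasks mentioned above; the gallery names and embedding values are made up for illustration.

```python
import math

def cosine(u, v):
    # cosine similarity between two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query, gallery):
    """Return gallery keys ranked by cosine similarity to the query embedding."""
    return sorted(gallery, key=lambda k: cosine(query, gallery[k]), reverse=True)

# toy frozen embeddings from a hypothetical contrastively trained encoder
gallery = {
    "cat_photo": [0.9, 0.1, 0.0],
    "dog_photo": [0.1, 0.9, 0.0],
    "car_photo": [0.0, 0.1, 0.9],
}
query = [0.8, 0.2, 0.1]  # embedding of a new, unseen cat image

ranking = retrieve(query, gallery)
```

Because the representation space places similar instances nearby, a plain nearest-neighbor lookup already performs retrieval; the same frozen features can feed a linear classifier or a clustering algorithm.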

(Image credit: Schroff et al. 2015)

Papers

Showing 50 of 6,661 papers

Title | Status | Hype
WeatherDepth: Curriculum Contrastive Learning for Self-Supervised Depth Estimation under Adverse Weather Conditions | Code | 1
Understanding the Robustness of Multi-modal Contrastive Learning to Distribution Shift | | 0
Transferable Availability Poisoning Attacks | Code | 0
SemST: Semantically Consistent Multi-Scale Image Translation via Structure-Texture Alignment | | 0
Instances and Labels: Hierarchy-aware Joint Supervised Contrastive Learning for Hierarchical Multi-Label Text Classification | Code | 1
Boosting Facial Action Unit Detection Through Jointly Learning Facial Landmark Detection and Domain Separation and Reconstruction | | 0
Integrating Contrastive Learning into a Multitask Transformer Model for Effective Domain Adaptation | | 0
Unbiased and Robust: External Attention-enhanced Graph Contrastive Learning for Cross-domain Sequential Recommendation | Code | 0
Towards Dynamic and Small Objects Refinement for Unsupervised Domain Adaptative Nighttime Semantic Segmentation | | 0
Degradation-Aware Self-Attention Based Transformer for Blind Image Super-Resolution | Code | 1
CUPre: Cross-domain Unsupervised Pre-training for Few-Shot Cell Segmentation | | 0
Perfect Alignment May be Poisonous to Graph Contrastive Learning | Code | 0
Fragment-based Pretraining and Finetuning on Molecular Graphs | Code | 1
Certifiably Robust Graph Contrastive Learning | Code | 1
Diffusion Models as Masked Audio-Video Learners | | 0
Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks | Code | 0
Deep Variational Multivariate Information Bottleneck -- A Framework for Variational Losses | | 0
Beyond Random Augmentations: Pretraining with Hard Views | Code | 0
Cold-start Bundle Recommendation via Popularity-based Coalescence and Curriculum Heating | Code | 0
TPDR: A Novel Two-Step Transformer-based Product and Class Description Match and Retrieval Method | | 0
PrototypeFormer: Learning to Explore Prototype Relationships for Few-shot Image Classification | | 0
Multimodal Prompt Transformer with Hybrid Contrastive Learning for Emotion Recognition in Conversation | | 0
Inclusive Data Representation in Federated Learning: A Novel Approach Integrating Textual and Visual Prompt | | 0
AstroCLIP: A Cross-Modal Foundation Model for Galaxies | Code | 1
Continual Contrastive Spoken Language Understanding | | 0
Co-modeling the Sequential and Graphical Routes for Peptide Representation Learning | Code | 0
FiGURe: Simple and Efficient Unsupervised Node Representations with Filter Augmentations | Code | 1
Prompting Audios Using Acoustic Properties For Emotion Representation | | 0
Understanding Masked Autoencoders From a Local Contrastive Perspective | | 0
OOD Aware Supervised Contrastive Learning | | 0
SNIP: Bridging Mathematical Symbolic and Numeric Realms with Unified Pre-training | Code | 1
LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment | Code | 4
Towards Distribution-Agnostic Generalized Category Discovery | Code | 1
An Investigation of Representation and Allocation Harms in Contrastive Learning | Code | 0
Self-supervised Learning for Anomaly Detection in Computational Workflows | | 0
A Task-oriented Dialog Model with Task-progressive and Policy-aware Pre-training | Code | 0
Siamese Representation Learning for Unsupervised Relation Extraction | Code | 0
TDCGL: Two-Level Debiased Contrastive Graph Learning for Recommendation | | 0
Decoding Realistic Images from Brain Activity with Contrastive Self-supervision and Latent Diffusion | | 0
Structural Adversarial Objectives for Self-Supervised Representation Learning | Code | 0
MuSe-GNN: Learning Unified Gene Representation From Multimodal Biological Graph Data | Code | 1
SCoRe: Submodular Combinatorial Representation Learning | | 0
RSAM: Learning on manifolds with Riemannian Sharpness-aware Minimization | | 0
Region-centric Image-Language Pretraining for Open-Vocabulary Detection | Code | 0
Information Flow in Self-Supervised Learning | Code | 1
Beyond Co-occurrence: Multi-modal Session-based Recommendation | Code | 1
Segment Anything Model is a Good Teacher for Local Feature Learning | Code | 1
ComSD: Balancing Behavioral Quality and Diversity in Unsupervised Skill Discovery | Code | 0
CtxMIM: Context-Enhanced Masked Image Modeling for Remote Sensing Image Understanding | | 0
3D-Mol: A Novel Contrastive Learning Framework for Molecular Property Prediction with 3D Information | | 0
Page 58 of 134

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified