SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.

(Image credit: Schroff et al. 2015)
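The pull-together/push-apart objective described above can be sketched as a minimal InfoNCE-style contrastive loss. This is an illustrative sketch, not the implementation from any paper listed below; the function and variable names are our own, and NumPy stands in for an actual deep learning framework.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss.

    Row i of `positives` is the positive pair for row i of `anchors`;
    every other row in the batch serves as a negative.
    """
    # L2-normalize so that dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                   # pairwise similarity matrix
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The "correct class" for anchor i is column i (its own positive)
    return -np.mean(np.diag(log_probs))

# Toy check: correctly matched pairs should score a lower loss
# than mismatched (shuffled) pairs.
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
loss_matched = info_nce_loss(z, z + 0.01 * rng.normal(size=(4, 8)))
loss_shuffled = info_nce_loss(z, z[::-1])
print(loss_matched < loss_shuffled)  # matched pairs yield the lower loss
```

In practice the two views fed to the loss come from data augmentations of the same instance, and the loss is minimized with respect to the parameters of an encoder network.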

Papers

Showing 6601–6650 of 6661 papers

Title | Status | Hype
Self-supervised contrastive learning unveils cortical folding pattern linked to prematurity | Code | 0
Self-Supervised Contrastive Learning with Adversarial Perturbations for Defending Word Substitution-based Attacks | Code | 0
Benchmarking Self-Supervised Contrastive Learning Methods for Image-Based Plant Phenotyping | Code | 0
UniGenCoder: Merging Seq2Seq and Seq2Tree Paradigms for Unified Code Generation | Code | 0
Dynamic Graph Representation with Contrastive Learning for Financial Market Prediction: Integrating Temporal Evolution and Static Relations | Code | 0
Bayesian Self-Supervised Contrastive Learning | Code | 0
ADA-Net: Attention-Guided Domain Adaptation Network with Contrastive Learning for Standing Dead Tree Segmentation Using Aerial Imagery | Code | 0
Dynamic Contrastive Learning for Time Series Representation | Code | 0
AmorLIP: Efficient Language-Image Pretraining via Amortization | Code | 0
AmCLR: Unified Augmented Learning for Cross-Modal Representations | Code | 0
The Trade-off between Universality and Label Efficiency of Representations from Contrastive Learning | Code | 0
Bayesian Robust Graph Contrastive Learning | Code | 0
DICE: Device-level Integrated Circuits Encoder with Graph Contrastive Pretraining | Code | 0
DWE+: Dual-Way Matching Enhanced Framework for Multimodal Entity Linking | Code | 0
ComSD: Balancing Behavioral Quality and Diversity in Unsupervised Skill Discovery | Code | 0
Self-supervised Multi-modal Training from Uncurated Image and Reports Enables Zero-shot Oversight Artificial Intelligence in Radiology | Code | 0
DWCL: Dual-Weighted Contrastive Learning for Multi-View Clustering | Code | 0
DV-FSR: A Dual-View Target Attack Framework for Federated Sequential Recommendation | Code | 0
Thought-Path Contrastive Learning via Premise-Oriented Data Augmentation for Logical Reading Comprehension | Code | 0
Dual-task Mutual Reinforcing Embedded Joint Video Paragraph Retrieval and Grounding | Code | 0
Dual Prototypical Contrastive Learning for Few-shot Semantic Segmentation | Code | 0
UNIMO: Towards Unified-Modal Understanding and Generation via Cross-Modal Contrastive Learning | Code | 0
BANER: Boundary-Aware LLMs for Few-Shot Named Entity Recognition | Code | 0
Tight PAC-Bayesian Risk Certificates for Contrastive Learning | Code | 0
Tile Compression and Embeddings for Multi-Label Classification in GeoLifeCLEF 2024 | Code | 0
Compound Figure Separation of Biomedical Images: Mining Large Datasets for Self-supervised Learning | Code | 0
Self-Supervised Interest Transfer Network via Prototypical Contrastive Learning for Recommendation | Code | 0
Dual-Level Cross-Modal Contrastive Clustering | Code | 0
Balancing Graph Embedding Smoothness in Self-Supervised Learning via Information-Theoretic Decomposition | Code | 0
Compound Figure Separation of Biomedical Images with Side Loss | Code | 0
Mutual Harmony: Sequential Recommendation with Dual Contrastive Network | Code | 0
Dual Cluster Contrastive learning for Object Re-Identification | Code | 0
UniNL: Aligning Representation Learning with Scoring Function for OOD Detection via Unified Neighborhood Learning | Code | 0
Balancing Embedding Spectrum for Recommendation | Code | 0
Alleviating Sparsity of Open Knowledge Graphs with Ternary Contrastive Learning | Code | 0
Self-Supervised Learning for Videos: A Survey | Code | 0
Dual Advancement of Representation Learning and Clustering for Sparse and Noisy Images | Code | 0
Balanced Multi-Relational Graph Clustering | Code | 0
DropMix: Better Graph Contrastive Learning with Harder Negative Samples | Code | 0
Self-Supervised Learning from Contrastive Mixtures for Personalized Speech Enhancement | Code | 0
MLEM: Generative and Contrastive Learning as Distinct Modalities for Event Sequences | Code | 0
Self-supervised learning of audio representations using angular contrastive loss | Code | 0
DoRA: Domain-Based Self-Supervised Learning Framework for Low-Resource Real Estate Appraisal | Code | 0
Learning Temporally Equivariance for Degenerative Disease Progression in OCT by Predicting Future Representations | Code | 0
Balanced Adversarial Training: Balancing Tradeoffs between Fickleness and Obstinacy in NLP Models | Code | 0
Alleviating Exposure Bias via Multi-level Contrastive Learning and Deviation Simulation in Abstractive Summarization | Code | 0
Don't Judge a Language Model by Its Last Layer: Contrastive Learning with Layer-Wise Attention Pooling | Code | 0
Composition-contrastive Learning for Sentence Embeddings | Code | 0
DomCLP: Domain-wise Contrastive Learning with Prototype Mixup for Unsupervised Domain Generalization | Code | 0
Page 133 of 134

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | | 10..5sec | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified