SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. It learns an embedding space in which similar instances lie close together while dissimilar instances lie far apart.
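The core objective can be made concrete with the NT-Xent (normalized temperature-scaled cross-entropy) loss used by SimCLR-style methods. This is a minimal NumPy sketch, not any specific paper's implementation; the function name and temperature default are illustrative choices.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss: for each embedding in z1, its augmented
    counterpart in z2 (same row index) is the positive; every other
    embedding in the combined batch is a negative."""
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)               # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # positive index of row i is i+n (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy over each row: -log softmax at the positive's column
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

When the two views of each instance are nearly identical, the loss is low; when they are unrelated, it is high — which is exactly the "pull positives together, push negatives apart" behavior described above.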

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
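For retrieval tasks in particular, the learned embeddings are typically used directly: gallery items are ranked by cosine similarity to a query embedding. A minimal sketch of that usage, with illustrative names (the encoder producing the embeddings is assumed to exist upstream):

```python
import numpy as np

def retrieve(query_emb, gallery_embs, k=5):
    """Return indices of the k gallery embeddings most similar to the
    query, ranked by cosine similarity -- the standard way contrastively
    learned representations are used for image or cross-modal retrieval."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    scores = g @ q                   # cosine similarity of each gallery item
    return np.argsort(-scores)[:k]   # indices of the top-k matches
```

The same embeddings can also be fed to a lightweight classifier or a clustering algorithm for the downstream tasks mentioned above.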

(Image credit: Schroff et al. 2015)
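The credited work (Schroff et al. 2015, FaceNet) popularized the triplet loss, an early and widely used contrastive objective. A minimal NumPy sketch of that loss, with the margin value as an illustrative default:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss (Schroff et al. 2015): pull each anchor toward its
    positive and away from its negative until the squared distances
    differ by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)   # anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2, axis=1)   # anchor-negative distance
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()
```

The loss is zero once every negative is farther from the anchor than the positive by the margin, so already-satisfied triplets contribute no gradient.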

Papers

Showing 5651–5700 of 6661 papers

Title | Status | Hype
Metadata-enhanced contrastive learning from retinal optical coherence tomography images |  | 0
Meta-node: A Concise Approach to Effectively Learn Complex Relationships in Heterogeneous Graphs |  | 0
Meta-optimized Joint Generative and Contrastive Learning for Sequential Recommendation |  | 0
Metapath-based Hyperbolic Contrastive Learning for Heterogeneous Graph Embedding |  | 0
Meta-ZSDETR: Zero-shot DETR with Meta-learning |  | 0
Metric-based multimodal meta-learning for human movement identification via footstep recognition |  | 0
Metric Learning for 3D Point Clouds Using Optimal Transport |  | 0
MFF-FTNet: Multi-scale Feature Fusion across Frequency and Temporal Domains for Time Series Forecasting |  | 0
MGI: Multimodal Contrastive pre-training of Genomic and Medical Imaging |  | 0
MGS3: A Multi-Granularity Self-Supervised Code Search Framework |  | 0
mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval |  | 0
Micro-Expression Recognition Based on Attribute Information Embedding and Cross-modal Contrastive Learning |  | 0
MIM4DD: Mutual Information Maximization for Dataset Distillation |  | 0
MimCo: Masked Image Modeling Pre-training with Contrastive Teacher |  | 0
MIMIC: Mask Image Pre-training with Mix Contrastive Fine-tuning for Facial Expression Recognition |  | 0
Mind Your Clever Neighbours: Unsupervised Person Re-identification via Adaptive Clustering Relationship Modeling |  | 0
Mining Better Samples for Contrastive Learning of Temporal Correspondence |  | 0
Mining the Explainability and Generalization: Fact Verification Based on Self-Instruction |  | 0
MIN: Multi-channel Interaction Network for Drug-Target Interaction with Protein Distillation |  | 0
MIO: Mutual Information Optimization using Self-Supervised Binary Contrastive Learning |  | 0
Misinformation Detection in Social Media Video Posts |  | 0
MISS: Multi-Interest Self-Supervised Learning Framework for Click-Through Rate Prediction |  | 0
MiSS@WMT21: Contrastive Learning-reinforced Domain Adaptation in Neural Machine Translation |  | 0
Mitigating Catastrophic Forgetting in Task-Incremental Continual Learning with Adaptive Classification Criterion |  | 0
Mitigating Contradictions in Dialogue Based on Contrastive Learning |  | 0
Mitigating Dataset Artifacts in Natural Language Inference Through Automatic Contextual Data Augmentation and Learning Optimization |  | 0
Mitigating Degree Bias Adaptively with Hard-to-Learn Nodes in Graph Contrastive Learning |  | 0
Mitigating Forgetting in Online Continual Learning via Contrasting Semantically Distinct Augmentations |  | 0
Mitigating Human and Computer Opinion Fraud via Contrastive Learning |  | 0
Mitigating Out-of-Entity Errors in Named Entity Recognition: A Sentence-Level Strategy |  | 0
Mitigating the Inconsistency Between Word Saliency and Model Confidence with Pathological Contrastive Training |  | 0
MixCL: Pixel label matters to contrastive learning |  | 0
Mixed Preference Optimization: Reinforcement Learning with Data Selection and Better Reference Model |  | 0
Mixed Supervised Graph Contrastive Learning for Recommendation |  | 0
MixSiam: A Mixture-based Approach to Self-supervised Representation Learning |  | 0
MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning |  | 0
MLIP: Medical Language-Image Pre-training with Masked Local Representation Learning |  | 0
ML-LMCL: Mutual Learning and Large-Margin Contrastive Learning for Improving ASR Robustness in Spoken Language Understanding |  | 0
MMBind: Unleashing the Potential of Distributed and Heterogeneous Data for Multimodal Learning in IoT |  | 0
Multilingual Molecular Representation Learning via Contrastive Pre-training |  | 0
MMGSD: Multi-Modal Gaussian Shape Descriptors for Correspondence Matching in 1D and 2D Deformable Objects |  | 0
m-mix: Generating hard negatives via multiple samples mixing for contrastive learning |  | 0
MN-Pair Contrastive Damage Representation and Clustering for Prognostic Explanation |  | 0
MobileVOS: Real-Time Video Object Segmentation Contrastive Learning meets Knowledge Distillation |  | 0
MoCLIM: Towards Accurate Cancer Subtyping via Multi-Omics Contrastive Learning with Omics-Inference Modeling |  | 0
MoCoKGC: Momentum Contrast Entity Encoding for Knowledge Graph Completion |  | 0
MoCo-Transfer: Investigating out-of-distribution contrastive learning for limited-data domains |  | 0
Modality-Agnostic Structural Image Representation Learning for Deformable Multi-Modality Medical Image Registration |  | 0
ModEFormer: Modality-Preserving Embedding for Audio-Video Synchronization using Transformers |  | 0
Model and Evaluation: Towards Fairness in Multilingual Text Classification |  | 0
Page 114 of 134

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 |  | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 |  | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 |  | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 |  | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 |  | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 |  | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 |  | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 |  | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 |  | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 |  | Unverified
# | Model | Metric | Claimed | Verified | Status
1 |  | 0..5sec | 1 |  | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 |  | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 |  | Unverified