SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised (self-supervised) representation learning. The goal is to learn an embedding space in which similar instances lie close together while dissimilar instances lie far apart.

It has proven effective across computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. The learned representations then serve as features for downstream tasks such as classification and clustering.
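A common instantiation of this objective is the InfoNCE / NT-Xent loss used by SimCLR-style methods: each instance's two augmented views form a positive pair, and all other instances in the batch serve as negatives. The following is a minimal numpy sketch (the function name, shapes, and default temperature are illustrative, not taken from any specific library):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over a batch of paired embeddings.

    z1, z2: (N, D) arrays; row i of z1 and row i of z2 are two augmented
    "views" of the same instance (a positive pair). All other rows in the
    batch act as negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)           # (2N, D)
    sim = z @ z.T / temperature                    # (2N, 2N) similarity matrix
    np.fill_diagonal(sim, -np.inf)                 # exclude self-similarity
    n = len(z1)
    # Index of each row's positive partner: i <-> i + n.
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])
    # Cross-entropy against the positive: -log softmax(sim)[i, pos[i]].
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return (logsumexp - sim[np.arange(2 * n), pos]).mean()
```

Lowering the temperature sharpens the softmax, so hard negatives (dissimilar pairs that happen to be close) dominate the gradient.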

(Image credit: Schroff et al. 2015)
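The credited figure is from FaceNet (Schroff et al., 2015), which uses the triplet formulation of the same idea: pull an anchor embedding toward a positive example and push it away from a negative by at least a margin. A minimal numpy sketch (names and the default margin are illustrative):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet margin loss over (N, D) embedding arrays.

    The loss is zero once every positive is closer to its anchor
    (in squared Euclidean distance) than the negative is, by `margin`.
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return float(np.mean(np.maximum(d_pos - d_neg + margin, 0.0)))
```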

Papers

Showing 5676–5700 of 6661 papers

Title | Status | Hype
Mitigating Dataset Artifacts in Natural Language Inference Through Automatic Contextual Data Augmentation and Learning Optimization | | 0
Mitigating Degree Bias Adaptively with Hard-to-Learn Nodes in Graph Contrastive Learning | | 0
Mitigating Forgetting in Online Continual Learning via Contrasting Semantically Distinct Augmentations | | 0
Mitigating Human and Computer Opinion Fraud via Contrastive Learning | | 0
Mitigating Out-of-Entity Errors in Named Entity Recognition: A Sentence-Level Strategy | | 0
Mitigating the Inconsistency Between Word Saliency and Model Confidence with Pathological Contrastive Training | | 0
MixCL: Pixel label matters to contrastive learning | | 0
Mixed Preference Optimization: Reinforcement Learning with Data Selection and Better Reference Model | | 0
Mixed Supervised Graph Contrastive Learning for Recommendation | | 0
MixSiam: A Mixture-based Approach to Self-supervised Representation Learning | | 0
MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning | | 0
MLIP: Medical Language-Image Pre-training with Masked Local Representation Learning | | 0
ML-LMCL: Mutual Learning and Large-Margin Contrastive Learning for Improving ASR Robustness in Spoken Language Understanding | | 0
MMBind: Unleashing the Potential of Distributed and Heterogeneous Data for Multimodal Learning in IoT | | 0
Multilingual Molecular Representation Learning via Contrastive Pre-training | | 0
MMGSD: Multi-Modal Gaussian Shape Descriptors for Correspondence Matching in 1D and 2D Deformable Objects | | 0
m-mix: Generating hard negatives via multiple samples mixing for contrastive learning | | 0
MN-Pair Contrastive Damage Representation and Clustering for Prognostic Explanation | | 0
MobileVOS: Real-Time Video Object Segmentation Contrastive Learning meets Knowledge Distillation | | 0
MoCLIM: Towards Accurate Cancer Subtyping via Multi-Omics Contrastive Learning with Omics-Inference Modeling | | 0
MoCoKGC: Momentum Contrast Entity Encoding for Knowledge Graph Completion | | 0
MoCo-Transfer: Investigating out-of-distribution contrastive learning for limited-data domains | | 0
Modality-Agnostic Structural Image Representation Learning for Deformable Multi-Modality Medical Image Registration | | 0
ModEFormer: Modality-Preserving Embedding for Audio-Video Synchronization using Transformers | | 0
Model and Evaluation: Towards Fairness in Multilingual Text Classification | | 0
Page 228 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | | 10..5sec | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified
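The accuracy rows above report Top-1 Accuracy: the fraction of samples whose highest-scoring predicted class equals the ground-truth label (for contrastive methods this is typically measured with a linear classifier trained on frozen features). A minimal sketch of the metric itself, with an illustrative function name:

```python
import numpy as np

def top1_accuracy(logits, labels):
    """Fraction of rows whose argmax class matches the label.

    logits: (N, C) array of per-class scores; labels: (N,) int array.
    """
    return float((logits.argmax(axis=1) == labels).mean())
```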