SOTAVerified

Contrastive Learning

Contrastive learning is a deep learning technique for unsupervised (self-supervised) representation learning. The goal is to learn an embedding of the data such that similar instances lie close together in the representation space, while dissimilar instances lie far apart.
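The "pull similar pairs together, push dissimilar pairs apart" objective can be sketched with the NT-Xent loss used in SimCLR-style training. This is a minimal NumPy illustration, not any specific paper's implementation; the function name, shapes, and default temperature are illustrative assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss sketch.

    z1, z2: (N, D) embeddings of two augmented views of the same N inputs.
    Row i of z1 and row i of z2 form a positive pair; all other rows in
    the combined batch act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize
    sim = z @ z.T / temperature                        # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = z1.shape[0]
    # the positive for index i is its counterpart in the other view
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy over similarities: -log softmax(sim)[i, pos[i]]
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

Feeding two views of the same batch yields a low loss when positive pairs are much more similar than random pairs, which is exactly the geometry the paragraph above describes.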

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
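For the retrieval tasks mentioned above, the learned embeddings are typically compared by cosine similarity. A minimal sketch, assuming unit-normalizable query and gallery embeddings (the function name and parameters are illustrative, not from the source):

```python
import numpy as np

def retrieve(query, gallery, k=5):
    """Rank gallery embeddings by cosine similarity to a query embedding.

    query:   (D,) embedding of the query item.
    gallery: (M, D) embeddings of the searchable collection.
    Returns the indices and similarities of the top-k matches.
    """
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity to each gallery item
    idx = np.argsort(-sims)[:k]      # top-k, most similar first
    return idx, sims[idx]
```

The same similarity scores can serve as features for downstream classification or clustering, as the paragraph above notes.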

(Image credit: Schroff et al. 2015)

Papers

Showing 25 of 6,661 papers

| Title | Status | Hype |
| --- | --- | --- |
| Neuron Abandoning Attention Flow: Visual Explanation of Dynamics inside CNN Models |  | 0 |
| Neuron Platonic Intrinsic Representation From Dynamics Using Contrastive Learning |  | 0 |
| Neuro-Symbolic Contrastive Learning for Cross-domain Inference |  | 0 |
| NEVLP: Noise-Robust Framework for Efficient Vision-Language Pre-training |  | 0 |
| NewsEmbed: Modeling News through Pre-trained Document Representations |  | 0 |
| NIDA-CLIFGAN: Natural Infrastructure Damage Assessment through Efficient Classification Combining Contrastive Learning, Information Fusion and Generative Adversarial Networks |  | 0 |
| Night-to-Day Translation via Illumination Degradation Disentanglement |  | 0 |
| NijiGAN: Transform What You See into Anime with Contrastive Semi-Supervised Learning and Neural Ordinary Differential Equations |  | 0 |
| NJUST-KMG at TRAC-2024 Tasks 1 and 2: Offline Harm Potential Identification |  | 0 |
| Node Embeddings via Neighbor Embeddings |  | 0 |
| Noise-BERT: A Unified Perturbation-Robust Framework with Noise Alignment Pre-training for Noisy Slot Filling Task |  | 0 |
| NoiseCLR: A Contrastive Learning Approach for Unsupervised Discovery of Interpretable Directions in Diffusion Models |  | 0 |
| Noise Is Also Useful: Negative Correlation-Steered Latent Contrastive Learning |  | 0 |
| Noise-Robust Contrastive Learning |  | 0 |
| No More Shortcuts: Realizing the Potential of Temporal Self-Supervision |  | 0 |
| Non-Contrastive Learning-based Behavioural Biometrics for Smart IoT Devices |  | 0 |
| No One Representation to Rule Them All: Overlapping Features of Training Methods |  | 0 |
| Not All Documents Are What You Need for Extracting Instruction Tuning Data |  | 0 |
| Not All Regions are Worthy to be Distilled: Region-aware Knowledge Distillation Towards Efficient Image-to-Image Translation |  | 0 |
| NoteLLM: A Retrievable Large Language Model for Note Recommendation |  | 0 |
| Nova: Generative Language Models for Assembly Code with Hierarchical Attention and Contrastive Learning |  | 0 |
| Novel Class Discovery for Open Set Raga Classification |  | 0 |
| Novelty Detection via Contrastive Learning with Negative Data Augmentation |  | 0 |
| Novelty Detection with Rotated Contrastive Predictive Coding |  | 0 |
| NukesFormers: Unpaired Hyperspectral Image Generation with Non-Uniform Domain Alignment |  | 0 |
Page 236 of 267

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 |  | Unverified |
| 2 | ResNet50 | ImageNet Top-1 Accuracy | 73 |  | Unverified |
| 3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 |  | Unverified |
| 4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 |  | Unverified |
| 5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 |  | Unverified |
| 6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 |  | Unverified |
| 7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 |  | Unverified |
| 8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 |  | Unverified |
| 9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 |  | Unverified |
| 10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 |  | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 |  | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 |  | Unverified |