SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn an embedding of the data such that similar instances lie close together in the representation space while dissimilar instances lie far apart.

It has proven effective across computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. The learned representations can then serve as features for downstream tasks such as classification and clustering.
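The pull-together / push-apart objective described above is most often realized as the InfoNCE loss: the anchor–positive similarity is treated as the correct class in a softmax over all anchor–candidate similarities. A minimal, framework-free sketch (function names, the temperature value, and the toy vectors are illustrative, not from any specific paper on this page):

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two plain-list vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor:
    -log( exp(s+ / t) / (exp(s+ / t) + sum_j exp(s-_j / t)) ).
    Lower loss means the positive is more similar to the anchor
    than the negatives are."""
    logits = [cosine_sim(anchor, positive) / temperature]
    logits += [cosine_sim(anchor, n) / temperature for n in negatives]
    # Log-sum-exp with max subtraction for numerical stability.
    m = max(logits)
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_denom)

# A well-aligned positive yields a smaller loss than a mismatched one.
good = info_nce([1.0, 0.0], [1.0, 0.1], [[0.0, 1.0]])
bad = info_nce([1.0, 0.0], [0.0, 1.0], [[1.0, 0.1]])
assert good < bad
```

In practice this is computed in batches, where every other sample in the batch serves as a negative for a given anchor.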

(Image credit: Schroff et al. 2015)

Papers

Showing 5276–5300 of 6661 papers

| Title | Status | Hype |
|---|---|---|
| Incorporating granularity bias as the margin into contrastive loss for video captioning | | 0 |
| Incremental False Negative Detection for Contrastive Learning | | 0 |
| Indoor Smartphone SLAM with Learned Echoic Location Features | | 0 |
| Inductive-Biases for Contrastive Learning of Disentangled Representations | | 0 |
| INDUS: Effective and Efficient Language Models for Scientific Applications | | 0 |
| Info3D: Representation Learning on 3D Objects using Mutual Information Maximization and Contrastive Learning | | 0 |
| InfoGCL: Information-Aware Graph Contrastive Learning | | 0 |
| InfoNCE: Identifying the Gap Between Theory and Practice | | 0 |
| Information-Aware Time Series Meta-Contrastive Learning | | 0 |
| Information fusion strategy integrating pre-trained language model and contrastive learning for materials knowledge mining | | 0 |
| Information-guided pixel augmentation for pixel-wise contrastive learning | | 0 |
| Information Maximization for Extreme Pose Face Recognition | | 0 |
| Addressing Feature Suppression in Unsupervised Visual Representations | | 0 |
| Information Theory-Guided Heuristic Progressive Multi-View Coding | | 0 |
| Inherit with Distillation and Evolve with Contrast: Exploring Class Incremental Semantic Segmentation Without Exemplar Memory | | 0 |
| Injecting Explainability and Lightweight Design into Weakly Supervised Video Anomaly Detection Systems | | 0 |
| Injecting Text in Self-Supervised Speech Pretraining | | 0 |
| Injecting Wiktionary to improve token-level contextual representations using contrastive learning | | 0 |
| InsCon: Instance Consistency Feature Representation via Self-Supervised Learning | | 0 |
| InsertionNet 2.0: Minimal Contact Multi-Step Insertion Using Multimodal Multiview Sensory Input | | 0 |
| Instance Adaptive Prototypical Contrastive Embedding for Generalized Zero Shot Learning | | 0 |
| Instance Paradigm Contrastive Learning for Domain Generalization | | 0 |
| Instance-Prototype Affinity Learning for Non-Exemplar Continual Graph Learning | | 0 |
| Instance Segmentation with Cross-Modal Consistency | | 0 |
Page 212 of 267

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified |
| 2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified |
| 3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified |
| 4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified |
| 5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified |
| 6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified |
| 7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified |
| 8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified |
| 9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified |
| 10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified |