SOTAVerified

Contrastive Learning

Contrastive learning is a deep learning technique for unsupervised representation learning. The goal is to learn an embedding of the data such that similar instances lie close together in the representation space while dissimilar instances lie far apart.
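One widely used contrastive objective is the InfoNCE (NT-Xent) loss, which treats each sample's two augmented views as a positive pair and all other samples in the batch as negatives. A minimal NumPy sketch (the function name, shapes, and temperature value are illustrative, not any particular library's API):

```python
import numpy as np

def info_nce(z_i, z_j, temperature=0.5):
    """InfoNCE loss for a batch of paired views.

    z_i, z_j: (N, D) embeddings of two augmented views of the same N samples.
    Row k of z_i should be close to row k of z_j and far from every other row.
    """
    # L2-normalise so dot products are cosine similarities
    z_i = z_i / np.linalg.norm(z_i, axis=1, keepdims=True)
    z_j = z_j / np.linalg.norm(z_j, axis=1, keepdims=True)
    logits = z_i @ z_j.T / temperature           # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs sit on the diagonal; maximise their log-probability
    return -np.mean(np.diag(log_probs))
```

Minimising this loss pulls each positive pair's similarity up relative to all in-batch negatives, which is exactly the "similar close, dissimilar far" geometry described above.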

It has proven effective across computer vision and natural language processing, including image retrieval, zero-shot learning, and cross-modal retrieval; the learned representations can also serve as features for downstream tasks such as classification and clustering.

(Image credit: Schroff et al. 2015)
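The image credit above refers to Schroff et al. (2015), who introduced the triplet formulation of this objective: pull an anchor toward a positive (similar) example and push it away from a negative (dissimilar) one by at least a margin. A minimal NumPy sketch (the function name and margin value are illustrative):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss: max(||a - p||^2 - ||a - n||^2 + margin, 0)."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # distance to similar example
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # distance to dissimilar example
    return np.maximum(d_pos - d_neg + margin, 0.0)

# Toy embeddings: the positive already sits much closer to the anchor
# than the negative, so the margin is satisfied and the loss is zero.
a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])
n = np.array([-1.0, 0.0])
print(triplet_loss(a, p, n))  # → 0.0
```

The loss only becomes positive when the negative intrudes within the margin of the positive, so training effort concentrates on hard triplets.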

Papers

Showing papers 5176-5200 of 6661

Title | Status | Hype
Temporal Abstractions-Augmented Temporally Contrastive Learning: An Alternative to the Laplacian in RL | | 0
Language modeling via stochastic processes | Code | 1
Revisiting Domain Generalized Stereo Matching Networks from a Feature Consistency Perspective | Code | 1
Partitioning Image Representation in Contrastive Learning | | 0
SimAN: Exploring Self-Supervised Representation Learning of Scene Text via Similarity-Aware Normalization | Code | 1
Multi-view Multi-behavior Contrastive Learning in Recommendation | Code | 1
Prototypical Verbalizer for Prompt-based Few-shot Tuning | Code | 4
Contrastive Learning with Positive-Negative Frame Mask for Music Representation | | 0
Mixing Up Contrastive Learning: Self-Supervised Representation Learning for Time Series | Code | 1
CYBORGS: Contrastively Bootstrapping Object Representations by Grounding in Segmentation | Code | 0
Modulated Contrast for Versatile Image Synthesis | Code | 1
PLANET: Dynamic Content Planning in Autoregressive Transformers for Long-form Text Generation | | 0
Contrastive Learning for Cross-Domain Open World Recognition | Code | 1
Robustness through Cognitive Dissociation Mitigation in Contrastive Adversarial Training | Code | 0
Weak Augmentation Guided Relational Self-Supervised Learning | Code | 1
QS-Attn: Query-Selected Attention for Contrastive Learning in I2I Translation | Code | 1
Is it all a cluster game? -- Exploring Out-of-Distribution Detection based on Clustering in the Embedding Space | | 0
Multi-View Dreaming: Multi-View World Model with Contrastive Learning | | 0
Better Quality Estimation for Low Resource Corpus Mining | | 0
Improving Word Translation via Two-Stage Contrastive Learning | Code | 1
InsCon: Instance Consistency Feature Representation via Self-Supervised Learning | | 0
Unpaired Deep Image Dehazing Using Contrastive Disentanglement Learning | | 0
Contrastive Learning of Sociopragmatic Meaning in Social Media | Code | 0
Supervised Contrastive Learning with Structure Inference for Graph Classification | | 0
Imputing Out-of-Vocabulary Embeddings with LOVE Makes Language Models Robust with Little Cost | Code | 1
Page 208 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | | 10..5sec | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified