SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
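The objective described above — pull similar pairs together, push dissimilar ones apart — can be sketched as a minimal InfoNCE-style loss. This is an illustrative NumPy implementation, not the method of any particular paper; the function name and temperature value are hypothetical choices.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.5):
    """Minimal InfoNCE-style contrastive loss (illustrative sketch).

    anchors, positives: (N, D) arrays. Row i of `positives` is the
    positive pair for row i of `anchors`; all other rows in the batch
    act as negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature  # (N, N) pairwise similarity matrix
    # Cross-entropy with the matching index as the target:
    # maximize sim(i, i) relative to sim(i, j != i).
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

The loss is small when each anchor is most similar to its own positive, and large (around log N) when positives are unrelated to their anchors.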

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
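One common way to use the learned representations as features for downstream tasks, as described above, is k-nearest-neighbour classification on frozen embeddings. The sketch below assumes the encoder has already produced embedding arrays; the helper name and parameters are illustrative.

```python
import numpy as np

def knn_predict(train_emb, train_labels, test_emb, k=5):
    """k-NN classification on frozen embeddings (illustrative sketch).

    A typical evaluation of contrastively learned representations:
    freeze the encoder, embed the data, then classify each test point
    by a majority vote over its k most cosine-similar training points.
    """
    train = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    test = test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)
    sims = test @ train.T                        # (n_test, n_train)
    nearest = np.argsort(-sims, axis=1)[:, :k]   # indices of top-k neighbours
    votes = train_labels[nearest]                # (n_test, k) neighbour labels
    # Majority vote among the k nearest neighbours.
    return np.array([np.bincount(v).argmax() for v in votes])
```

If the representation space is well trained, points of the same class cluster together and this simple classifier performs well without any fine-tuning.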

(Image credit: Schroff et al. 2015)

Papers

Showing 4926–4950 of 6661 papers

Title | Status | Hype
Improving Long Tailed Document-Level Relation Extraction via Easy Relation Augmentation and Contrastive Learning | — | 0
Robust Task-Oriented Dialogue Generation with Contrastive Pre-training and Adversarial Filtering | — | 0
Data Augmentation for Compositional Data: Advancing Predictive Models of the Microbiome | Code | 0
Self-Supervised Time Series Representation Learning via Cross Reconstruction Transformer | Code | 1
Contrastive Learning with Cross-Modal Knowledge Mining for Multimodal Human Activity Recognition | Code | 1
What's Behind the Mask: Understanding Masked Graph Modeling for Graph Autoencoders | Code | 6
Label-invariant Augmentation for Semi-Supervised Graph Classification | — | 0
A graph-transformer for whole slide image classification | Code | 1
A Simple yet Effective Relation Information Guided Approach for Few-Shot Relation Extraction | Code | 1
Personalized Prompt for Sequential Recommendation | — | 0
RankGen: Improving Text Generation with Large Ranking Models | Code | 1
Free Lunch for Surgical Video Understanding by Distilling Self-Supervisions | Code | 1
Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning | Code | 1
Masked Image Modeling with Denoising Contrast | Code | 1
Improving Micro-video Recommendation via Contrastive Multiple Interests | Code | 0
CREATER: CTR-driven Advertising Text Generation with Controlled Pre-Training and Contrastive Fine-Tuning | — | 0
Relation Extraction with Weighted Contrastive Pre-training on Distant Supervision | Code | 0
Attention-aware contrastive learning for predicting T cell receptor-antigen binding specificity | — | 0
Dynamic Recognition of Speakers for Consent Management by Contrastive Embedding Replay | — | 0
A two-steps approach to improve the performance of Android malware detectors | — | 0
FactPEGASUS: Factuality-Aware Pre-training and Fine-tuning for Abstractive Summarization | Code | 1
Fine-tuning Pre-trained Language Models for Few-shot Intent Detection: Supervised Pre-training and Isotropization | Code | 1
Toward a Geometrical Understanding of Self-supervised Contrastive Learning | — | 0
Arithmetic-Based Pretraining -- Improving Numeracy of Pretrained Language Models | Code | 0
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning | — | 0
Page 198 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | — | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | — | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | — | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | — | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | — | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | — | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | — | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | — | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | — | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 10..5sec1 | — | — | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | — | Unverified