SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
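The "pull similar pairs together, push dissimilar pairs apart" objective is commonly formalized as the InfoNCE loss. Below is a minimal, library-free sketch of that objective; the function names and the toy vectors are illustrative, not taken from any specific paper on this page.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE: negative log-softmax score of the positive pair
    against the positive plus all negative pairs."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # stabilize logsumexp
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]

# A well-aligned positive pair yields a low loss; swapping the
# positive and a negative raises it.
anchor = [1.0, 0.0]
low = info_nce(anchor, [0.9, 0.1], [[0.0, 1.0], [-1.0, 0.0]])
high = info_nce(anchor, [0.0, 1.0], [[0.9, 0.1], [-1.0, 0.0]])
```

Minimizing this loss drives the anchor's similarity to its positive above its similarity to every negative, which is exactly the geometry described above.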

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
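For the retrieval use case mentioned above, downstream inference can be as simple as ranking gallery embeddings by cosine similarity to a query embedding. The sketch below assumes the embeddings were already produced by some contrastive encoder; the vectors and the `retrieve` helper are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query, gallery, k=2):
    """Rank gallery embeddings by cosine similarity to the query
    and return the indices of the top-k matches."""
    ranked = sorted(range(len(gallery)),
                    key=lambda i: cosine(query, gallery[i]),
                    reverse=True)
    return ranked[:k]

# Hypothetical pre-computed embeddings from a contrastive encoder.
gallery = [[1.0, 0.1], [0.2, 1.0], [0.9, 0.2]]
top = retrieve([1.0, 0.0], gallery, k=2)  # indices of nearest items
```

The same nearest-neighbor ranking underlies k-NN classification and clustering on top of frozen contrastive features.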

(Image credit: Schroff et al. 2015)

Papers

Showing 2401–2425 of 6661 papers

Title | Status | Hype
Lexical Knowledge Internalization for Neural Dialog Generation | Code | 0
Advancing Brainwave-Based Biometrics: A Large-Scale, Multi-Session Evaluation | Code | 0
Contrastive Learning of Semantic and Visual Representations for Text Tracking | Code | 0
Leveraging Graph Structures to Detect Hallucinations in Large Language Models | Code | 0
Leveraging Contrastive Learning and Self-Training for Multimodal Emotion Recognition with Limited Labeled Samples | Code | 0
Statement-Level Vulnerability Detection: Learning Vulnerability Patterns Through Information Theory and Contrastive Learning | Code | 0
Leveraging Group Classification with Descending Soft Labeling for Deep Imbalanced Regression | Code | 0
Brain-Aware Replacements for Supervised Contrastive Learning in Detection of Alzheimer's Disease | Code | 0
Less is More: Selective Reduction of CT Data for Self-Supervised Pre-Training of Deep Learning Models with Contrastive Learning Improves Downstream Classification Performance | Code | 0
An Information Minimization Based Contrastive Learning Model for Unsupervised Sentence Embeddings Learning | Code | 0
Length is a Curse and a Blessing for Document-level Semantics | Code | 0
Lesion-Aware Contrastive Representation Learning for Histopathology Whole Slide Images Analysis | Code | 0
Contrastive Learning of General-Purpose Audio Representations | Code | 0
Learning with Open-world Noisy Data via Class-independent Margin in Dual Representation Space | Code | 0
Learning What You Need from What You Did: Product Taxonomy Expansion with User Behaviors Supervision | Code | 0
Leave No One Behind: Online Self-Supervised Self-Distillation for Sequential Recommendation | Code | 0
Less Attention is More: Prompt Transformer for Generalized Category Discovery | Code | 0
Learning Tree-Structured Composition of Data Augmentation | Code | 0
Learning to Plan via Supervised Contrastive Learning and Strategic Interpolation: A Chess Case Study | Code | 0
CL-MRI: Self-Supervised Contrastive Learning to Improve the Accuracy of Undersampled MRI Reconstruction | Code | 0
Learning to Locate Visual Answer in Video Corpus Using Question | Code | 0
Learning Transferable Pedestrian Representation from Multimodal Information Supervision | Code | 0
Less is More: Multimodal Region Representation via Pairwise Inter-view Learning | Code | 0
Link Prediction with Non-Contrastive Learning | Code | 0
ManiNeg: Manifestation-guided Multimodal Pretraining for Mammography Classification | Code | 0
Page 97 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | | 0..5sec | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified