SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn an embedding of the data such that similar instances lie close together in the representation space, while dissimilar instances lie far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
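The "pull similar pairs together, push dissimilar pairs apart" objective described above is commonly instantiated as the InfoNCE loss. Below is a minimal NumPy sketch (not any specific paper's implementation): each anchor embedding is contrasted against one matching positive and, implicitly, all other rows in the batch as negatives. The function name and the temperature value are illustrative choices.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE contrastive loss (illustrative sketch).

    Row i of `positives` is the positive for row i of `anchors`;
    every other row in the batch serves as a negative.
    """
    # L2-normalise so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                    # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The correct (positive) column for row i is column i
    return -np.mean(np.diag(log_probs))
```

With aligned anchor/positive pairs the loss is near zero; shuffling the positives (so each anchor's "positive" is actually a mismatch) drives it up, which is exactly the geometry the description above asks for.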

(Image credit: Schroff et al. 2015)

Papers

Showing 2276–2300 of 6661 papers

Title | Status | Hype
COROLLA: An Efficient Multi-Modality Fusion Framework with Supervised Contrastive Learning for Glaucoma Grading | Code | 0
MarsEclipse at SemEval-2023 Task 3: Multi-Lingual and Multi-Label Framing Detection with Contrastive Learning | Code | 0
Are Existing Out-Of-Distribution Techniques Suitable for Network Intrusion Detection? | Code | 0
Coreference Graph Guidance for Mind-Map Generation | Code | 0
Adversarial Momentum-Contrastive Pre-Training | Code | 0
Mask-Guided Contrastive Attention Model for Person Re-Identification | Code | 0
MSA-UNet3+: Multi-Scale Attention UNet3+ with New Supervised Prototypical Contrastive Loss for Coronary DSA Image Segmentation | Code | 0
Copy-Pasting Coherent Depth Regions Improves Contrastive Learning for Urban-Scene Segmentation | Code | 0
Manifold Contrastive Learning with Variational Lie Group Operators | Code | 0
COOKIE: Contrastive Cross-Modal Knowledge Sharing Pre-Training for Vision-Language Representation | Code | 0
ManiNeg: Manifestation-guided Multimodal Pretraining for Mammography Classification | Code | 0
Architecture Matters: Uncovering Implicit Mechanisms in Graph Contrastive Learning | Code | 0
Making the Most of Text Semantics to Improve Biomedical Vision–Language Processing | Code | 0
CATALOG: A Camera Trap Language-guided Contrastive Learning Model | Code | 0
Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning | Code | 0
A Question-centric Multi-experts Contrastive Learning Framework for Improving the Accuracy and Interpretability of Deep Sequential Knowledge Tracing Models | Code | 0
Conventional Contrastive Learning Often Falls Short: Improving Dense Retrieval with Cross-Encoder Listwise Distillation and Synthetic Data | Code | 0
Machine Unlearning in Hyperbolic vs. Euclidean Multimodal Contrastive Learning: Adapting Alignment Calibration to MERU | Code | 0
CASC-AI: Consensus-aware Self-corrective AI Agents for Noise Cell Segmentation | Code | 0
Controlled Text Generation with Hidden Representation Transformations | Code | 0
MA-AVT: Modality Alignment for Parameter-Efficient Audio-Visual Transformers | Code | 0
M3ANet: Multi-scale and Multi-Modal Alignment Network for Brain-Assisted Target Speaker Extraction | Code | 0
Making Pre-trained Language Models Better Continual Few-Shot Relation Extractors | Code | 0
Mao-Zedong At SemEval-2023 Task 4: Label Represention Multi-Head Attention Model With Contrastive Learning-Enhanced Nearest Neighbor Mechanism For Multi-Label Text Classification | Code | 0
Multi-task Pre-training Language Model for Semantic Network Completion | Code | 0
Page 92 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | (garbled entry: "10..5sec1") | | | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified