SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.

(Image credit: Schroff et al. 2015)
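The "similar instances close, dissimilar instances far" objective described above can be sketched with the triplet loss of Schroff et al. 2015 (the paper credited for the image). The following is a minimal illustrative NumPy sketch, not a reference implementation; the toy embeddings and the margin value 0.2 are arbitrary choices for demonstration.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss (Schroff et al. 2015): pull each anchor toward its
    positive and push it away from its negative by at least `margin`,
    measured by squared Euclidean distance in embedding space."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # distance to similar sample
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # distance to dissimilar sample
    return np.mean(np.maximum(d_pos - d_neg + margin, 0.0))

# Toy example: batches of L2-normalized 4-d embeddings.
rng = np.random.default_rng(0)
a = rng.normal(size=(8, 4)); a /= np.linalg.norm(a, axis=1, keepdims=True)
p = a + 0.05 * rng.normal(size=a.shape)               # positives: near the anchors
p /= np.linalg.norm(p, axis=1, keepdims=True)
n = rng.normal(size=a.shape)                          # negatives: unrelated samples
n /= np.linalg.norm(n, axis=1, keepdims=True)
print(triplet_loss(a, p, n))  # low when positives are close and negatives far
```

The loss is zero for any triplet whose negative is already farther than its positive by the margin, so training focuses on triplets that still violate the desired geometry.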

Papers

Showing 726–750 of 6661 papers

Title | Status | Hype
DyCON: Dynamic Uncertainty-aware Consistency and Contrastive Learning for Semi-supervised Medical Image Segmentation | — | 0
Bringing CLIP to the Clinic: Dynamic Soft Labels and Negation-Aware Learning for Medical Analysis | — | 0
Perceptual Inductive Bias Is What You Need Before Contrastive Learning | — | 0
Multi-Modal Contrastive Masked Autoencoders: A Two-Stage Progressive Pre-training Approach for RGBD Datasets | — | 0
SLADE: Shielding against Dual Exploits in Large Vision-Language Models | — | 0
Less Attention is More: Prompt Transformer for Generalized Category Discovery | Code | 0
Viewpoint Rosetta Stone: Unlocking Unpaired Ego-Exo Videos for View-invariant Representation Learning | — | 0
Alignment, Mining and Fusion: Representation Alignment with Hard Negative Mining and Selective Knowledge Fusion for Medical Visual Question Answering | — | 0
SmartCLIP: Modular Vision-language Alignment with Identification Guarantees | Code | 1
ROLL: Robust Noisy Pseudo-label Learning for Multi-View Clustering with Noisy Correspondence | — | 0
Link-based Contrastive Learning for One-Shot Unsupervised Domain Adaptation | — | 0
Multi-modal Contrastive Learning with Negative Sampling Calibration for Phenotypic Drug Discovery | — | 0
Revisiting Graph Neural Networks on Graph-level Tasks: Comprehensive Experiments, Analysis, and Improvements | — | 0
Make Domain Shift a Catastrophic Forgetting Alleviator in Class-Incremental Learning | — | 0
Phoneme-Level Contrastive Learning for User-Defined Keyword Spotting with Flexible Enrollment | — | 0
Unsupervised dense retrieval with counterfactual contrastive learning | — | 0
Frequency-Masked Embedding Inference: A Non-Contrastive Approach for Time Series Representation Learning | Code | 1
Hierarchical Banzhaf Interaction for General Video-Language Representation Learning | — | 0
EraseAnything: Enabling Concept Erasure in Rectified Flow Transformers | Code | 1
Defending Multimodal Backdoored Models by Repulsive Visual Prompt Tuning | — | 0
Multi-Modality Driven LoRA for Adverse Condition Depth Estimation | — | 0
Self-Calibrated Dual Contrasting for Annotation-Efficient Bacteria Raman Spectroscopy Clustering and Classification | — | 0
Injecting Explainability and Lightweight Design into Weakly Supervised Video Anomaly Detection Systems | — | 0
Neighbor Does Matter: Density-Aware Contrastive Learning for Medical Semi-supervised Segmentation | — | 0
NijiGAN: Transform What You See into Anime with Contrastive Semi-Supervised Learning and Neural Ordinary Differential Equations | — | 0
Page 30 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | — | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | — | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | — | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | — | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | — | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | — | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | — | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | — | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | — | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 10..5sec | 1 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | — | Unverified