SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
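
The "similar close, dissimilar far" objective above is commonly realized with an InfoNCE/NT-Xent-style loss over paired views of the same instance. The following is a minimal numpy sketch under that assumption — the function name, batch layout (`z_i[k]` and `z_j[k]` are the two views of instance `k`), and temperature value are illustrative, not taken from this page:

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.5):
    """NT-Xent-style contrastive loss (illustrative sketch).

    z_i, z_j: (N, d) arrays; row k of each is one "view" of instance k.
    Each embedding's positive is its paired view; all other embeddings
    in the batch act as negatives.
    """
    z = np.concatenate([z_i, z_j], axis=0)            # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    n = z_i.shape[0]
    # The positive for row k is row (k + n) mod 2n (its paired view).
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))       # log-sum-exp over candidates
    loss = -(sim[np.arange(2 * n), pos_idx] - log_denom)
    return loss.mean()
```

Intuition: the loss is low when each embedding is more similar to its paired view than to everything else in the batch, which is exactly the "similar instances close together" property described above.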

(Image credit: Schroff et al. 2015)
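
The credited figure comes from FaceNet (Schroff et al., 2015), which popularized the triplet formulation of the same idea: pull an anchor toward a positive of the same identity and push it away from a negative by at least a margin. A minimal numpy sketch, with the margin value and squared-Euclidean distance chosen for illustration:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss in the spirit of Schroff et al. (2015), illustrative sketch.

    anchor, positive, negative: (N, d) arrays of embeddings.
    Penalizes triplets where the anchor-positive distance is not at least
    `margin` smaller than the anchor-negative distance.
    """
    d_ap = np.sum((anchor - positive) ** 2, axis=1)  # squared anchor-positive dist
    d_an = np.sum((anchor - negative) ** 2, axis=1)  # squared anchor-negative dist
    return np.mean(np.maximum(d_ap - d_an + margin, 0.0))
```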

Papers

Showing 5326–5350 of 6661 papers

Each title below is listed with its Hype score (0 for every paper on this page):

- Invariance-adapted decomposition and Lasso-type contrastive learning
- Invariant and consistent: Unsupervised representation learning for few-shot visual recognition
- InverTune: Removing Backdoors from Multimodal Contrastive Learning Models via Trigger Inversion and Activation Tuning
- Investigating Data Memorization in 3D Latent Diffusion Models for Medical Image Synthesis
- Investigating Deep Neural Network Architecture and Feature Extraction Designs for Sensor-based Human Activity Recognition
- Investigating Graph Structure Information for Entity Alignment with Dangling Cases
- End-to-End Lyrics Recognition with Self-supervised Learning
- Investigating Self-Supervised Methods for Label-Efficient Learning
- Investigating the Benefits of Projection Head for Representation Learning
- Investigating the Role of Negatives in Contrastive Representation Learning
- Investigating Why Contrastive Learning Benefits Robustness Against Label Noise
- IROAM: Improving Roadside Monocular 3D Object Detection Learning from Autonomous Vehicle Data Domain
- Is Contrasting All You Need? Contrastive Learning for the Detection and Attribution of AI-generated Text
- Is Cross-modal Information Retrieval Possible without Training?
- "Is depression related to cannabis?": A knowledge-infused model for Entity and Relation Extraction with Limited Supervision
- ISDrama: Immersive Spatial Drama Generation through Multimodal Prompting
- Is it all a cluster game? -- Exploring Out-of-Distribution Detection based on Clustering in the Embedding Space
- Isolating authorship from content with semantic embeddings and contrastive learning
- I Speak and You Find: Robust 3D Visual Grounding with Noisy and Ambiguous Speech Inputs
- Is Self-Supervised Learning More Robust Than Supervised Learning?
- Iter-AHMCL: Alleviate Hallucination for Large Language Model via Iterative Model-level Contrastive Learning
- Iterated Learning Improves Compositionality in Large Vision-Language Models
- Iterative Bilinear Temporal-Spectral Fusion for Unsupervised Representation Learning in Time Series
- Iterative Graph Self-Distillation
- Iterative Quantum Feature Maps
Page 214 of 267

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | — | Unverified |
| 2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | — | Unverified |
| 3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | — | Unverified |
| 4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | — | Unverified |
| 5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | — | Unverified |
| 6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | — | Unverified |
| 7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | — | Unverified |
| 8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | — | Unverified |
| 9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | — | Unverified |
| 10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | — | 10..5sec | 1 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | — | Unverified |