SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.

(Image credit: Schroff et al. 2015)
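The "similar instances close, dissimilar instances far apart" objective described above is commonly instantiated as the InfoNCE (NT-Xent) loss popularized by methods such as SimCLR. Below is a minimal NumPy sketch under that assumption; the function and parameter names are illustrative, not from any particular paper's codebase:

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.5):
    """InfoNCE / NT-Xent loss for a batch of positive pairs.

    z_i, z_j: (N, D) embeddings; row k of z_i and row k of z_j are two
    views (e.g. augmentations) of the same instance.
    """
    # L2-normalize so the dot product is cosine similarity.
    z_i = z_i / np.linalg.norm(z_i, axis=1, keepdims=True)
    z_j = z_j / np.linalg.norm(z_j, axis=1, keepdims=True)

    z = np.concatenate([z_i, z_j], axis=0)   # (2N, D)
    sim = z @ z.T / temperature              # (2N, 2N) scaled similarities
    np.fill_diagonal(sim, -np.inf)           # exclude self-comparisons

    n = z_i.shape[0]
    # The positive for row k is row k + n, and vice versa.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])

    # Per-row cross-entropy: -log softmax(sim)[row, positive index].
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = (logsumexp - sim[np.arange(2 * n), pos]).mean()
    return loss
```

Pulling the two views of each instance together while pushing all other batch entries away is exactly the close/far geometry the definition above asks for; the temperature controls how sharply hard negatives are weighted.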

Papers

Showing 901–925 of 6661 papers

Title | Status | Hype
CLIP-Event: Connecting Text and Images with Event Structures | Code | 1
Asymmetric Patch Sampling for Contrastive Learning | Code | 1
CLIP-guided Federated Learning on Heterogeneous and Long-Tailed Data | Code | 1
ICE: Inter-instance Contrastive Encoding for Unsupervised Person Re-identification | Code | 1
CLIP-KD: An Empirical Study of CLIP Model Distillation | Code | 1
CLIP-Lite: Information Efficient Visual Representation Learning with Language Supervision | Code | 1
CLIPLoss and Norm-Based Data Selection Methods for Multimodal Contrastive Learning | Code | 1
Deep Image Clustering with Contrastive Learning and Multi-scale Graph Convolutional Networks | Code | 1
Image Quality Assessment using Contrastive Learning | Code | 1
Image-Text Co-Decomposition for Text-Supervised Semantic Segmentation | Code | 1
Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness | Code | 1
Improved Baselines with Momentum Contrastive Learning | Code | 1
Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection | Code | 1
Improving Antibody Humanness Prediction using Patent Data | Code | 1
Improving Contrastive Learning by Visualizing Feature Transformation | Code | 1
Improving Contrastive Learning of Sentence Embeddings with Case-Augmented Positives and Retrieved Negatives | Code | 1
Improving Contrastive Learning on Imbalanced Seed Data via Open-World Sampling | Code | 1
Improving Contrastive Learning on Imbalanced Data via Open-World Sampling | Code | 1
CLMLF: A Contrastive Learning and Multi-Layer Fusion Method for Multimodal Sentiment Detection | Code | 1
Improving Graph Collaborative Filtering with Neighborhood-enriched Contrastive Learning | Code | 1
CL-MVSNet: Unsupervised Multi-View Stereo with Dual-Level Contrastive Learning | Code | 1
Improving Knowledge-aware Recommendation with Multi-level Interactive Contrastive Learning | Code | 1
Improving Self-Supervised Learning by Characterizing Idealized Representations | Code | 1
CLOCS: Contrastive Learning of Cardiac Signals Across Space, Time, and Patients | Code | 1
BatchSampler: Sampling Mini-Batches for Contrastive Learning in Vision, Language, and Graphs | Code | 1
Page 37 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | | | 10..5sec | 1 | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified
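The Top-1 accuracy metric reported in the tables above (for contrastive methods, typically measured by a linear classifier trained on frozen features) is simply the fraction of samples whose highest-scoring class matches the ground-truth label. A minimal sketch, with illustrative names:

```python
import numpy as np

def top1_accuracy(logits, labels):
    """Fraction of samples whose argmax prediction equals the label."""
    return float((np.argmax(logits, axis=1) == labels).mean())

# Toy example: 4 samples, 2 classes; predictions are 0, 1, 0, 1.
logits = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]])
labels = np.array([0, 1, 0, 0])
```

With these toy values three of the four predictions match, giving 0.75 (i.e. 75%, on the same scale as the table entries).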