SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
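The idea above can be made concrete with a loss function. As an illustration only (not code from any paper listed below), here is a minimal NumPy sketch of the widely used InfoNCE objective, in which each anchor embedding must match its own positive against all other positives in the batch; the temperature value is an assumed default:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE loss: each anchor should match its own positive
    against all other positives in the batch (in-batch negatives).
    `temperature` is an illustrative default, not from the source."""
    # L2-normalise so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The correct pairing is the diagonal: anchor i <-> positive i
    return -np.mean(np.diag(log_prob))
```

When anchors and positives are already aligned the loss is near zero; for mismatched embeddings it grows, pushing similar pairs together and dissimilar ones apart.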

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.

(Image credit: Schroff et al. 2015)
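The credited work (Schroff et al., 2015, FaceNet) popularised the triplet formulation of this idea. A minimal NumPy sketch, with an assumed margin value for illustration:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss (in the style of Schroff et al. 2015): pull the
    anchor-positive distance below the anchor-negative distance by
    at least `margin`. The margin value here is illustrative."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # squared L2 distance
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    # Hinge: zero loss once the negative is margin further than the positive
    return np.mean(np.maximum(d_pos - d_neg + margin, 0.0))
```

Triplets whose negative is already far enough away contribute zero loss, which is why triplet mining strategies matter in practice.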

Papers

Showing 151–200 of 6661 papers

Title | Status | Hype
GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents | Code | 2
ReVersion: Diffusion-Based Relation Inversion from Images | Code | 2
Automated Self-Supervised Learning for Recommendation | Code | 2
A Systematic Study of Joint Representation Learning on Protein Sequences and Structures | Code | 2
Mimic before Reconstruct: Enhancing Masked Autoencoders with Feature Mimicking | Code | 2
Extended Agriculture-Vision: An Extension of a Large Aerial Image Dataset for Agricultural Pattern Analysis | Code | 2
Multimodal Industrial Anomaly Detection via Hybrid Fusion | Code | 2
Language-Driven Representation Learning for Robotics | Code | 2
Multi-Modal Self-Supervised Learning for Recommendation | Code | 2
LightGCL: Simple Yet Effective Graph Contrastive Learning for Recommendation | Code | 2
Multi-modal Molecule Structure-text Model for Text-based Retrieval and Editing | Code | 2
PLA: Language-Driven Open-Vocabulary 3D Scene Understanding | Code | 2
Semi-Supervised Confidence-Level-based Contrastive Discrimination for Class-Imbalanced Semantic Segmentation | Code | 2
UniMSE: Towards Unified Multimodal Sentiment Analysis and Emotion Recognition | Code | 2
Contrastive Search Is What You Need For Neural Text Generation | Code | 2
Multi-View Reasoning: Consistent Contrastive Learning for Math Word Problem | Code | 2
MedCLIP: Contrastive Learning from Unpaired Medical Images and Text | Code | 2
QDTrack: Quasi-Dense Similarity Learning for Appearance-Only Multiple Object Tracking | Code | 2
Long-Form Video-Language Pre-Training with Multimodal Temporal Contrastive Learning | Code | 2
Content-Based Search for Deep Generative Models | Code | 2
When and why vision-language models behave like bags-of-words, and what to do about it? | Code | 2
Contrastive Audio-Visual Masked Autoencoder | Code | 2
Generalized Parametric Contrastive Learning | Code | 2
XSimGCL: Towards Extremely Simple Graph Contrastive Learning for Recommendation | Code | 2
Decoding speech perception from non-invasive brain recordings | Code | 2
Self-supervised Contrastive Representation Learning for Semi-supervised Time-Series Classification | Code | 2
In Defense of Online Models for Video Instance Segmentation | Code | 2
Few-Shot Scene Classification of Optical Remote Sensing Images Leveraging Calibrated Pretext Tasks | Code | 2
Exploring Contrastive Learning for Multimodal Detection of Misogynistic Memes | Code | 2
Enhancing Multi-view Stereo with Contrastive Matching and Weighted Focal Loss | Code | 2
Egocentric Video-Language Pretraining | Code | 2
Cross-lingual and Multilingual CLIP | Code | 2
CoNT: Contrastive Neural Text Generation | Code | 2
Contrastive Learning Rivals Masked Image Modeling in Fine-tuning via Feature Distillation | Code | 2
GraphMAE: Self-Supervised Masked Graph Autoencoders | Code | 2
CLIP-Art: Contrastive Pre-training for Fine-Grained Art Classification | Code | 2
DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings | Code | 2
Contrastive language and vision learning of general fashion concepts | Code | 2
Unified Contrastive Learning in Image-Text-Label Space | Code | 2
Rethinking Visual Geo-localization for Large-Scale Applications | Code | 2
Large-Scale Pre-training for Person Re-identification with Noisy Labels | Code | 2
Contrastive learning of Class-agnostic Activation Map for Weakly Supervised Object Localization and Semantic Segmentation | Code | 2
R3M: A Universal Visual Representation for Robot Manipulation | Code | 2
Protein Representation Learning by Geometric Structure Pretraining | Code | 2
SimKGC: Simple Contrastive Knowledge Graph Completion with Pre-trained Language Models | Code | 2
BatchFormer: Learning to Explore Sample Relationships for Robust Representation Learning | Code | 2
CrossPoint: Self-Supervised Cross-Modal Contrastive Learning for 3D Point Cloud Understanding | Code | 2
Vision-Language Pre-Training with Triple Contrastive Learning | Code | 2
A Self-Supervised Descriptor for Image Copy Detection | Code | 2
Inter-subject Contrastive Learning for Subject Adaptive EEG-based Visual Recognition | Code | 2
Page 4 of 134

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | – | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | – | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | – | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | – | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | – | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | – | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | – | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | – | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | – | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | – | Unverified