SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
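The core idea above is typically implemented with an InfoNCE-style objective (the NT-Xent loss used in SimCLR): embeddings of two augmented views of the same instance are pulled together, while all other instances in the batch act as negatives. A minimal NumPy sketch of this loss (function name and shapes are illustrative, not from any specific paper here):

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.5):
    """NT-Xent / InfoNCE loss over a batch of positive pairs.

    z_i, z_j: (N, D) embedding arrays; row k of z_i and row k of z_j
    are two augmented views of the same instance (a positive pair).
    """
    z = np.concatenate([z_i, z_j], axis=0)            # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit norm -> cosine sim
    sim = z @ z.T / temperature                       # (2N, 2N) similarity
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    N = len(z_i)
    # the positive partner of row k is row (k + N) mod 2N
    pos = np.concatenate([np.arange(N, 2 * N), np.arange(N)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * N), pos].mean()
```

Minimizing this loss maximizes the softmax probability of each embedding's positive partner relative to every other embedding in the batch, which is exactly the "similar close, dissimilar far" criterion described above.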

(Image credit: Schroff et al. 2015)
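The credited figure is from FaceNet (Schroff et al., 2015), which popularized the triplet loss, an early and widely used contrastive objective. A short sketch of that loss (variable names are illustrative):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss in the style of FaceNet (Schroff et al., 2015):
    the anchor should be closer to the positive than to the negative
    by at least `margin` in squared L2 distance."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()
```

When the negative is already farther away than the positive by more than the margin, the hinge term is zero and the triplet contributes no gradient, which is why FaceNet mines "hard" triplets during training.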

Papers

Showing 176–200 of 6661 papers

Title | Status | Hype
Detecting and Grounding Multi-Modal Media Manipulation | Code | 2
BEVLoc: Cross-View Localization and Matching via Birds-Eye-View Synthesis | Code | 2
Contrastive Learning of Asset Embeddings from Financial Time Series | Code | 2
Latent Guard: a Safety Framework for Text-to-image Generation | Code | 2
Learn From Zoom: Decoupled Supervised Contrastive Learning For WCE Image Classification | Code | 2
Learning To Describe Player Form in The MLB | Code | 2
Learning Vision from Models Rivals Learning Vision from Data | Code | 2
Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment | Code | 2
Long-Form Video-Language Pre-Training with Multimodal Temporal Contrastive Learning | Code | 2
Contrastive learning of cell state dynamics in response to perturbations | Code | 2
Analyzing and Boosting the Power of Fine-Grained Visual Recognition for Multi-modal Large Language Models | Code | 2
SoftCoT++: Test-Time Scaling with Soft Chain-of-Thought Reasoning | Code | 2
MedCLIP: Contrastive Learning from Unpaired Medical Images and Text | Code | 2
Contrastive Learning for Unpaired Image-to-Image Translation | Code | 2
Contrastive learning of Class-agnostic Activation Map for Weakly Supervised Object Localization and Semantic Segmentation | Code | 2
Content-Based Search for Deep Generative Models | Code | 2
CoNT: Contrastive Neural Text Generation | Code | 2
A Comprehensive Survey on Self-Supervised Learning for Recommendation | Code | 2
Contrasting Deepfakes Diffusion via Contrastive Learning and Global-Local Similarities | Code | 2
NeuroNet: A Novel Hybrid Self-Supervised Learning Framework for Sleep Stage Classification Using Single-Channel EEG | Code | 2
An Experimental Study on Exploring Strong Lightweight Vision Transformers via Masked Image Modeling Pre-Training | Code | 2
One Train for Two Tasks: An Encrypted Traffic Classification Framework Using Supervised Contrastive Learning | Code | 2
One Trajectory, One Token: Grounded Video Tokenization via Panoptic Sub-object Trajectory | Code | 2
CLIP-Art: Contrastive Pre-training for Fine-Grained Art Classification | Code | 2
Contrastive Audio-Visual Masked Autoencoder | Code | 2

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | | 10..5sec | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified