SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
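As a concrete sketch of the idea, the widely used InfoNCE (NT-Xent) objective scores each "positive" pair of embeddings against all other in-batch pairs, pulling positives together and pushing negatives apart. The minimal NumPy implementation below is illustrative only; the function name and temperature value are assumptions, not taken from any paper listed on this page.

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.5):
    """InfoNCE / NT-Xent loss: row k of z_i and row k of z_j are a
    positive pair; every other row of z_j serves as a negative."""
    # L2-normalize so the dot product is cosine similarity
    z_i = z_i / np.linalg.norm(z_i, axis=1, keepdims=True)
    z_j = z_j / np.linalg.norm(z_j, axis=1, keepdims=True)
    # Pairwise similarity matrix, scaled by the temperature
    logits = z_i @ z_j.T / temperature
    # Log-softmax over each row; the positive sits on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy against the diagonal targets
    return -np.mean(np.diag(log_probs))
```

In practice (e.g. in SimCLR-style training) `z_i` and `z_j` would be encoder outputs for two augmented views of the same batch; the loss is small when each embedding is closest to its own positive and large when positives are misaligned.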

(Image credit: Schroff et al. 2015)

Papers

Showing 201–225 of 6661 papers

Title | Status | Hype
Learning Commonality, Divergence and Variety for Unsupervised Visible-Infrared Person Re-identification | Code | 2
PromptBERT: Improving BERT Sentence Embeddings with Prompts | Code | 2
C2AM: Contrastive Learning of Class-Agnostic Activation Map for Weakly Supervised Object Localization and Semantic Segmentation | Code | 2
QDTrack: Quasi-Dense Similarity Learning for Appearance-Only Multiple Object Tracking | Code | 2
RAR: Retrieving And Ranking Augmented MLLMs for Visual Recognition | Code | 2
Reconstructing the Mind's Eye: fMRI-to-Image with Contrastive Learning and Diffusion Priors | Code | 2
Rethinking Visual Geo-localization for Large-Scale Applications | Code | 2
ReVersion: Diffusion-Based Relation Inversion from Images | Code | 2
Robust and Reliable Early-Stage Website Fingerprinting Attacks via Spatial-Temporal Distribution Analysis | Code | 2
RouterDC: Query-Based Router by Dual Contrastive Learning for Assembling Large Language Models | Code | 2
Contrastive Audio-Visual Masked Autoencoder | Code | 2
CoST: Contrastive Learning of Disentangled Seasonal-Trend Representations for Time Series Forecasting | Code | 2
Self-Supervised Any-Point Tracking by Contrastive Random Walks | Code | 2
Self-Supervised Contrastive Learning for Long-term Forecasting | Code | 2
DeTeCtive: Detecting AI-generated Text via Multi-Level Contrastive Learning | Code | 2
A Multi-Modal Contrastive Diffusion Model for Therapeutic Peptide Generation | Code | 1
AASAE: Augmentation-Augmented Stochastic Autoencoders | Code | 1
Clustering-Aware Negative Sampling for Unsupervised Sentence Representation | Code | 1
CLUDA: Contrastive Learning in Unsupervised Domain Adaptation for Semantic Segmentation | Code | 1
Breaking the Batch Barrier (B3) of Contrastive Learning via Smart Batch Mining | Code | 1
Cluster-guided Contrastive Graph Clustering Network | Code | 1
Cluster-Level Contrastive Learning for Emotion Recognition in Conversations | Code | 1
CLOCS: Contrastive Learning of Cardiac Signals Across Space, Time, and Patients | Code | 1
CL-MVSNet: Unsupervised Multi-View Stereo with Dual-Level Contrastive Learning | Code | 1
CLOOB: Modern Hopfield Networks with InfoLOOB Outperform CLIP | Code | 1
Page 9 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | – | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | – | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | – | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | – | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | – | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | – | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | – | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | – | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | – | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | – | 10..5sec | 1 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | – | Unverified