SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for self-supervised representation learning. The goal is to learn an embedding of the data such that similar instances lie close together in the representation space, while dissimilar instances lie far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
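The "pull similar pairs together, push dissimilar pairs apart" idea above is most commonly realized with the InfoNCE (NT-Xent) objective: each anchor is scored against all candidates, and cross-entropy rewards high similarity with its matching positive relative to the rest of the batch. Below is a minimal numpy sketch of that objective; the function name and temperature value are illustrative, not taken from any specific paper on this page.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE contrastive loss (illustrative sketch).

    For each row i, positives[i] is the matching view of anchors[i];
    every other row in the batch serves as a negative.
    """
    # L2-normalize so dot products become cosine similarities.
    anchors = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    positives = positives / np.linalg.norm(positives, axis=1, keepdims=True)

    # Similarity matrix: entry (i, j) compares anchor i with candidate j.
    logits = anchors @ positives.T / temperature

    # Cross-entropy with the diagonal (matching pairs) as targets.
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

With well-aligned pairs (e.g. two augmentations of the same image mapped to nearby embeddings) the loss is near zero; with unrelated pairs it approaches log of the batch size, which is what drives the encoder to separate instances.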

(Image credit: Schroff et al. 2015)

Papers

Showing 101–125 of 6661 papers

Title | Status | Hype
Delving into Inter-Image Invariance for Unsupervised Visual Representations | Code | 2
GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents | Code | 2
A Self-Supervised Descriptor for Image Copy Detection | Code | 2
Denoising as Adaptation: Noise-Space Domain Adaptation for Image Restoration | Code | 2
Are Graph Augmentations Necessary? Simple Graph Contrastive Learning for Recommendation | Code | 2
GraphMAE: Self-Supervised Masked Graph Autoencoders | Code | 2
Hybrid Internal Model: Learning Agile Legged Locomotion with Simulated Robot Response | Code | 2
Improved Canonicalization for Model Agnostic Equivariance | Code | 2
Detecting and Grounding Multi-Modal Media Manipulation | Code | 2
ECG-Chat: A Large ECG-Language Model for Cardiac Disease Diagnosis | Code | 2
Automated Self-Supervised Learning for Recommendation | Code | 2
Avoiding Shortcuts: Enhancing Channel-Robust Specific Emitter Identification via Single-Source Domain Generalization | Code | 2
DATR: Unsupervised Domain Adaptive Detection Transformer with Dataset-Level Adaptation and Prototypical Alignment | Code | 2
PLA: Language-Driven Open-Vocabulary 3D Scene Understanding | Code | 2
CrossPoint: Self-Supervised Cross-Modal Contrastive Learning for 3D Point Cloud Understanding | Code | 2
DCdetector: Dual Attention Contrastive Representation Learning for Time Series Anomaly Detection | Code | 2
4D Contrastive Superflows are Dense 3D Representation Learners | Code | 2
Large-Scale Pre-training for Person Re-identification with Noisy Labels | Code | 2
Crafting Better Contrastive Views for Siamese Representation Learning | Code | 2
CoST: Contrastive Learning of Disentangled Seasonal-Trend Representations for Time Series Forecasting | Code | 2
Cross-lingual and Multilingual CLIP | Code | 2
LightGCL: Simple Yet Effective Graph Contrastive Learning for Recommendation | Code | 2
DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning | Code | 2
Contrastive Learning of Class-agnostic Activation Map for Weakly Supervised Object Localization and Semantic Segmentation | Code | 2
Contrastive learning of cell state dynamics in response to perturbations | Code | 2
Page 5 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 10..5sec | 1 | | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified