SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
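The "close together / far apart" objective described above is typically trained with a contrastive loss. Below is a minimal NumPy sketch of an NT-Xent-style (normalized temperature-scaled cross-entropy) loss, as used by SimCLR-like methods; the function name, shapes, and temperature value are illustrative assumptions, not from this page.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Illustrative NT-Xent contrastive loss (SimCLR-style sketch).

    z1, z2: (N, D) embeddings of two augmented views of the same N inputs;
    row i of z1 and row i of z2 form a positive pair, all other rows are
    treated as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity

    n = len(z1)
    # The positive for index i is i+n, and for index i+n it is i.
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])

    # Cross-entropy over each row: pull the positive's log-probability up.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()
```

A quick sanity check of the intended behavior: embeddings of two nearly identical views should incur a much lower loss than embeddings of unrelated inputs, since the positives then dominate the softmax.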

(Image credit: Schroff et al. 2015)

Papers

Showing 276–300 of 6661 papers

Title | Status | Hype
Hierarchical Consensus Network for Multiview Feature Learning | Code | 1
T-SCEND: Test-time Scalable MCTS-enhanced Diffusion Model | Code | 1
CycleGuardian: A Framework for Automatic RespiratorySound classification Based on Improved Deep clustering and Contrastive Learning | Code | 1
Prostate-Specific Foundation Models for Enhanced Detection of Clinically Significant Cancer | Code | 1
Hierarchical Time-Aware Mixture of Experts for Multi-Modal Sequential Recommendation | Code | 1
Low-rank Prompt Interaction for Continual Vision-Language Retrieval | Code | 1
Leveraging Textual Anatomical Knowledge for Class-Imbalanced Semi-Supervised Multi-Organ Segmentation | Code | 1
MixRec: Individual and Collective Mixing Empowers Data Augmentation for Recommender Systems | Code | 1
Assisting Mathematical Formalization with A Learning-based Premise Retriever | Code | 1
MedFILIP: Medical Fine-grained Language-Image Pre-training | Code | 1
LD-DETR: Loop Decoder DEtection TRansformer for Video Moment Retrieval and Highlight Detection | Code | 1
AIRCHITECT v2: Learning the Hardware Accelerator Design Space through Unified Representations | Code | 1
A Simple Graph Contrastive Learning Framework for Short Text Classification | Code | 1
Towards Robust and Realistic Human Pose Estimation via WiFi Signals | Code | 1
Uncertainty-aware Knowledge Tracing | Code | 1
AD-L-JEPA: Self-Supervised Spatial World Models with Joint Embedding Predictive Architecture for Autonomous Driving with LiDAR Data | Code | 1
Are They the Same? Exploring Visual Correspondence Shortcomings of Multimodal LLMs | Code | 1
Dual-level Adaptive Incongruity-enhanced Model for Multimodal Sarcasm Detection | Code | 1
Watch Video, Catch Keyword: Context-aware Keyword Attention for Moment Retrieval and Highlight Detection | Code | 1
Multimodal Contrastive Representation Learning in Augmented Biomedical Knowledge Graphs | Code | 1
MADGEN: Mass-Spec attends to De Novo Molecular generation | Code | 1
Relation3D: Enhancing Relation Modeling for Point Cloud Instance Segmentation | Code | 1
SmartCLIP: Modular Vision-language Alignment with Identification Guarantees | Code | 1
Frequency-Masked Embedding Inference: A Non-Contrastive Approach for Time Series Representation Learning | Code | 1
EraseAnything: Enabling Concept Erasure in Rectified Flow Transformers | Code | 1
Page 12 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | | 10..5sec | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified