SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
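In many self-supervised variants (e.g. SimCLR-style methods), the "pull similar together, push dissimilar apart" objective is realised with the InfoNCE / NT-Xent loss: two augmented views of the same instance form a positive pair, and every other instance in the batch serves as a negative. A minimal NumPy sketch of this loss (illustrative only; the function name and batch layout are our own choices, not a specific paper's implementation):

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.5):
    """InfoNCE / NT-Xent loss for a batch of positive pairs.

    z_i, z_j: (N, D) embeddings; row k of z_i and row k of z_j are two
    augmented views of the same instance (a positive pair). All other
    rows in the combined batch act as negatives.
    """
    # L2-normalise so dot products are cosine similarities
    z_i = z_i / np.linalg.norm(z_i, axis=1, keepdims=True)
    z_j = z_j / np.linalg.norm(z_j, axis=1, keepdims=True)
    z = np.concatenate([z_i, z_j], axis=0)        # (2N, D)
    sim = z @ z.T / temperature                   # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                # exclude self-similarity
    n = z_i.shape[0]
    # index of each row's positive: row k pairs with row k + n, and vice versa
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # numerically stable log-softmax, then cross-entropy against the positive
    m = sim.max(axis=1, keepdims=True)
    logsumexp = m[:, 0] + np.log(np.exp(sim - m).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))
```

The temperature scales the similarity logits: lower values sharpen the softmax and penalise hard negatives more strongly.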

(Image credit: Schroff et al. 2015)
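The cited work, FaceNet (Schroff et al. 2015), is a well-known example of the supervised flavour of this idea: it trains face embeddings with a triplet loss, which requires each anchor to be closer to a positive (same identity) than to a negative (different identity) by at least a margin. A minimal NumPy sketch of that objective (illustrative only; the function name and margin value are our own assumptions):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss over a batch of (anchor, positive, negative) embeddings,
    each of shape (N, D): hinge on the gap between squared distances."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)  # distance to positive
    d_neg = np.sum((anchor - negative) ** 2, axis=1)  # distance to negative
    return float(np.mean(np.maximum(d_pos - d_neg + margin, 0.0)))
```

The loss is zero for triplets already satisfying the margin, so training effort concentrates on violating ("hard") triplets.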

Papers

Showing 2326–2350 of 6661 papers

Title | Status | Hype
Classification and Clustering of Sentence-Level Embeddings of Scientific Articles Generated by Contrastive Learning | | 0
A Generic Method for Fine-grained Category Discovery in Natural Language Texts | | 0
Goal-conditioned reinforcement learning for ultrasound navigation guidance | | 0
DATA: Multi-Disentanglement based Contrastive Learning for Open-World Semi-Supervised Deepfake Attribution | | 0
CLASSIC: Continual and Contrastive Learning of Aspect Sentiment Classification Tasks | | 0
A General-Purpose Transferable Predictor for Neural Architecture Search | | 0
Classes Are Not Equal: An Empirical Study on Image Recognition Fairness | | 0
Supervised Graph Contrastive Learning for Few-shot Node Classification | | 0
Data-Efficient Contrastive Learning by Differentiable Hard Sample and Hard Positive Pair Generation | | 0
Data curation via joint example selection further accelerates multimodal learning | | 0
Class-aware and Augmentation-free Contrastive Learning from Label Proportion | | 0
3D Scene Graph Guided Vision-Language Pre-training | | 0
GMM-Based Comprehensive Feature Extraction and Relative Distance Preservation For Few-Shot Cross-Modal Retrieval | | 0
Bi-level Contrastive Learning for Knowledge-Enhanced Molecule Representations | | 0
Gradient-guided Unsupervised Text Style Transfer via Contrastive Learning | | 0
Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels | | 0
CLaSP: Learning Concepts for Time-Series Signals from Natural Language Supervision | | 0
Data Augmentation of Contrastive Learning is Estimating Positive-incentive Noise | | 0
Multi-Variant Consistency based Self-supervised Learning for Robust Automatic Speech Recognition | | 0
ClarityEthic: Explainable Moral Judgment Utilizing Contrastive Ethical Insights from Large Language Models | | 0
Data Adaptive Traceback for Vision-Language Foundation Models in Image Classification | | 0
DashCLIP: Leveraging multimodal models for generating semantic embeddings for DoorDash | | 0
A General Purpose Supervisory Signal for Embodied Agents | | 0
DARTS: A Dual-View Attack Framework for Targeted Manipulation in Federated Sequential Recommendation | | 0
DART: Disease-aware Image-Text Alignment and Self-correcting Re-alignment for Trustworthy Radiology Report Generation | | 0
Page 94 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | | 10..5sec | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified