SOTAVerified

Contrastive Learning

Contrastive learning is a deep learning technique for unsupervised (often self-supervised) representation learning. The goal is to learn an embedding space in which similar instances lie close together while dissimilar instances lie far apart.

It has proven effective across computer vision and natural language processing, including image retrieval, zero-shot learning, and cross-modal retrieval. The learned representations can then serve as features for downstream tasks such as classification and clustering.
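One common instantiation of this objective is the InfoNCE (NT-Xent) loss used by SimCLR-style methods: each instance's two augmented views form a positive pair, and all other instances in the batch act as negatives. A minimal NumPy sketch (function name, shapes, and temperature are illustrative, not taken from this page):

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.5):
    """InfoNCE / NT-Xent loss for a batch of positive pairs.

    z_i, z_j: (N, D) arrays of embeddings; row k of z_i and row k of z_j
    are two augmented views of the same instance (a positive pair).
    """
    # L2-normalize so dot products become cosine similarities
    z_i = z_i / np.linalg.norm(z_i, axis=1, keepdims=True)
    z_j = z_j / np.linalg.norm(z_j, axis=1, keepdims=True)
    z = np.concatenate([z_i, z_j], axis=0)      # (2N, D)
    sim = z @ z.T / temperature                 # (2N, 2N) similarity matrix
    np.fill_diagonal(sim, -np.inf)              # exclude self-similarity
    n = z_i.shape[0]
    # index of the positive for each row: row k pairs with row k + n (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy against the positive: -log softmax(sim)[row, positive]
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return (logsumexp - sim[np.arange(2 * n), pos]).mean()
```

The loss is small when each view is most similar to its own positive and dissimilar to everything else in the batch, which is exactly the "similar close, dissimilar far" objective described above.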

(Image credit: Schroff et al. 2015)
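The image credit points to Schroff et al. 2015 (FaceNet), which popularized the triplet-loss form of this objective: pull an anchor toward a positive example and push it away from a negative by at least a margin. A minimal sketch (margin value illustrative):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss over a batch of (anchor, positive, negative) rows,
    using squared Euclidean distances as in FaceNet."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)  # anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2, axis=1)  # anchor-negative distance
    # zero loss once the negative is farther than the positive by >= margin
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()
```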

Papers

Showing 626–650 of 6661 papers

Title | Status | Hype
--- | --- | ---
Intent Contrastive Learning with Cross Subsequences for Sequential Recommendation | Code | 1
HEProto: A Hierarchical Enhancing ProtoNet based on Multi-Task Learning for Few-shot Named Entity Recognition | Code | 1
Contrast Everything: A Hierarchical Contrastive Framework for Medical Time-Series | Code | 1
MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter | Code | 1
PREM: A Simple Yet Effective Approach for Node-Level Graph Anomaly Detection | Code | 1
CLARA: Multilingual Contrastive Learning for Audio Representation Acquisition | Code | 1
SimCKP: Simple Contrastive Learning of Keyphrase Representations | Code | 1
Enhancing Text-based Knowledge Graph Completion with Zero-Shot Large Language Models: A Focus on Semantic Enhancement | Code | 1
Rethinking Negative Pairs in Code Search | Code | 1
Language Models As Semantic Indexers | Code | 1
DrugCLIP: Contrastive Protein-Molecule Representation Learning for Virtual Screening | Code | 1
InfoCL: Alleviating Catastrophic Forgetting in Continual Text Classification from An Information Theoretic Perspective | Code | 1
Aligning Language Models with Human Preferences via a Bayesian Approach | Code | 1
WeatherDepth: Curriculum Contrastive Learning for Self-Supervised Depth Estimation under Adverse Weather Conditions | Code | 1
Instances and Labels: Hierarchy-aware Joint Supervised Contrastive Learning for Hierarchical Multi-Label Text Classification | Code | 1
Degradation-Aware Self-Attention Based Transformer for Blind Image Super-Resolution | Code | 1
Certifiably Robust Graph Contrastive Learning | Code | 1
Fragment-based Pretraining and Finetuning on Molecular Graphs | Code | 1
AstroCLIP: A Cross-Modal Foundation Model for Galaxies | Code | 1
SNIP: Bridging Mathematical Symbolic and Numeric Realms with Unified Pre-training | Code | 1
FiGURe: Simple and Efficient Unsupervised Node Representations with Filter Augmentations | Code | 1
Towards Distribution-Agnostic Generalized Category Discovery | Code | 1
Segment Anything Model is a Good Teacher for Local Feature Learning | Code | 1
Information Flow in Self-Supervised Learning | Code | 1
Beyond Co-occurrence: Multi-modal Session-based Recommendation | Code | 1
Page 26 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | 10..5sec1 | | | | Unverified
# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified