SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data in which similar instances lie close together in the representation space, while dissimilar instances lie far apart.

It has proved effective across computer vision and natural language processing, including image retrieval, zero-shot learning, and cross-modal retrieval. The learned representations can then serve as features for downstream tasks such as classification and clustering.

(Image credit: Schroff et al. 2015)
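As a concrete illustration of the idea (pulling positives together and pushing negatives apart), here is a minimal NumPy sketch of an NT-Xent-style contrastive objective, as used in SimCLR-type methods. The function name, temperature default, and batch layout are illustrative assumptions, not taken from any paper listed below.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss sketch (illustrative, not an official implementation).

    z1, z2: (N, d) embeddings of two augmented views of the same N inputs;
    row i of z1 and row i of z2 form a positive pair, all other rows are negatives.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize embeddings
    sim = z @ z.T / temperature                        # scaled cosine similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # the positive for index i is i+n (first half) or i-n (second half)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))        # softmax denominator
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)   # cross-entropy per sample
    return loss.mean()
```

With well-aligned views (each pair of embeddings nearly identical), the loss is lower than with unrelated views, which is exactly the behavior the representation objective described above asks for.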

Papers

Showing 2876–2900 of 6661 papers

Title | Status | Hype
Co-modeling the Sequential and Graphical Routes for Peptide Representation Learning | Code | 0
FiGURe: Simple and Efficient Unsupervised Node Representations with Filter Augmentations | Code | 1
Prompting Audios Using Acoustic Properties For Emotion Representation | | 0
Understanding Masked Autoencoders From a Local Contrastive Perspective | | 0
OOD Aware Supervised Contrastive Learning | | 0
SNIP: Bridging Mathematical Symbolic and Numeric Realms with Unified Pre-training | Code | 1
LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment | Code | 4
Towards Distribution-Agnostic Generalized Category Discovery | Code | 1
An Investigation of Representation and Allocation Harms in Contrastive Learning | Code | 0
Self-supervised Learning for Anomaly Detection in Computational Workflows | | 0
A Task-oriented Dialog Model with Task-progressive and Policy-aware Pre-training | Code | 0
Siamese Representation Learning for Unsupervised Relation Extraction | Code | 0
TDCGL: Two-Level Debiased Contrastive Graph Learning for Recommendation | | 0
Decoding Realistic Images from Brain Activity with Contrastive Self-supervision and Latent Diffusion | | 0
Structural Adversarial Objectives for Self-Supervised Representation Learning | Code | 0
MuSe-GNN: Learning Unified Gene Representation From Multimodal Biological Graph Data | Code | 1
SCoRe: Submodular Combinatorial Representation Learning | | 0
RSAM: Learning on manifolds with Riemannian Sharpness-aware Minimization | | 0
Region-centric Image-Language Pretraining for Open-Vocabulary Detection | Code | 0
Information Flow in Self-Supervised Learning | Code | 1
Beyond Co-occurrence: Multi-modal Session-based Recommendation | Code | 1
Segment Anything Model is a Good Teacher for Local Feature Learning | Code | 1
ComSD: Balancing Behavioral Quality and Diversity in Unsupervised Skill Discovery | Code | 0
CtxMIM: Context-Enhanced Masked Image Modeling for Remote Sensing Image Understanding | | 0
3D-Mol: A Novel Contrastive Learning Framework for Molecular Property Prediction with 3D Information | | 0
Page 116 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | | 10..5sec | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified