SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
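A common way to realize this goal is an InfoNCE-style (NT-Xent) objective, as popularized by SimCLR-type methods: two augmented views of the same input form a positive pair whose embeddings are pulled together, while all other samples in the batch act as negatives and are pushed apart. The sketch below is a minimal NumPy illustration of that objective, not the implementation of any specific paper listed on this page:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N inputs.
    Returns the mean contrastive loss over all 2N anchors.
    """
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)          # (2N, D)
    sim = z @ z.T / temperature                    # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                 # exclude self-similarity
    n = len(z1)
    # the positive for anchor i is its other augmented view
    targets = np.concatenate([np.arange(n) + n, np.arange(n)])
    # row-wise cross-entropy: -log softmax at the positive index
    m = sim.max(axis=1, keepdims=True)
    logsumexp = m[:, 0] + np.log(np.exp(sim - m).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), targets]))
```

Lowering the temperature sharpens the softmax, so well-aligned positive pairs drive the loss toward zero; in practice values around 0.1–0.5 are typical.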

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
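As a sketch of how learned embeddings feed a downstream task, a simple k-nearest-neighbor probe classifies a query by majority vote among its most similar training embeddings. The function and data below are illustrative assumptions, not part of any benchmark on this page:

```python
import numpy as np

def knn_classify(train_emb, train_labels, query_emb, k=5):
    """Classify queries by majority vote among their k nearest
    training embeddings, using cosine similarity."""
    a = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    b = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    sim = b @ a.T                        # (num_queries, num_train)
    preds = []
    for row in sim:
        nearest = np.argsort(row)[-k:]   # indices of the k most similar
        votes = train_labels[nearest]
        preds.append(np.bincount(votes).argmax())
    return np.array(preds)
```

Such non-parametric probes (or a frozen linear classifier) are the standard way to evaluate representation quality without fine-tuning the encoder.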

(Image credit: Schroff et al. 2015)

Papers

Showing 801–825 of 6661 papers

Title | Status | Hype
--- | --- | ---
CoCo: Coherence-Enhanced Machine-Generated Text Detection Under Data Limitation With Contrastive Learning | Code | 1
CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion | Code | 1
Contrastive Learning with Hard Negative Samples | Code | 1
Contrastive Learning with Hard Negative Entities for Entity Set Expansion | Code | 1
Contrastive Learning with Large Memory Bank and Negative Embedding Subtraction for Accurate Copy Detection | Code | 1
Exploring the Impact of Negative Samples of Contrastive Learning: A Case Study of Sentence Embedding | Code | 1
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning | Code | 1
BCE-Net: Reliable Building Footprints Change Extraction based on Historical Map and Up-to-Date Images using Contrastive Learning | Code | 1
A Self-supervised Method for Entity Alignment | Code | 1
CoCon: Cooperative-Contrastive Learning | Code | 1
COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining | Code | 1
Fair Contrastive Learning for Facial Attribute Classification | Code | 1
Contrastive Learning with Cross-Modal Knowledge Mining for Multimodal Human Activity Recognition | Code | 1
FakeCLR: Exploring Contrastive Learning for Solving Latent Discontinuity in Data-Efficient GANs | Code | 1
CoCoNets: Continuous Contrastive 3D Scene Representations | Code | 1
Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | Code | 1
CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery | Code | 1
Feature Representation Learning for Unsupervised Cross-domain Image Retrieval | Code | 1
Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning | Code | 1
FedIIC: Towards Robust Federated Learning for Class-Imbalanced Medical Image Classification | Code | 1
A Sentence is Worth 128 Pseudo Tokens: A Semantic-Aware Contrastive Learning Framework for Sentence Embeddings | Code | 1
FedX: Unsupervised Federated Learning with Cross Knowledge Distillation | Code | 1
CLDG: Contrastive Learning on Dynamic Graphs | Code | 1
FiGURe: Simple and Efficient Unsupervised Node Representations with Filter Augmentations | Code | 1
A Broad Study on the Transferability of Visual Representations with Contrastive Learning | Code | 1
Page 33 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
--- | --- | --- | --- | --- | ---
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified