SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. The learned representations can then serve as features for downstream tasks such as classification and clustering.
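The pull-together/push-apart objective described above is commonly implemented as the InfoNCE (NT-Xent) loss popularized by methods such as SimCLR: each instance's two augmented views form a positive pair, and all other instances in the batch act as negatives. A minimal NumPy sketch (function name, shapes, and temperature are illustrative assumptions, not from this page):

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.5):
    """InfoNCE / NT-Xent loss for a batch of positive pairs.

    z_i, z_j: (N, D) embeddings of two augmented views of the same N instances.
    Returns the mean cross-entropy of picking each embedding's positive
    among all 2N-1 other embeddings, using cosine similarity.
    """
    z = np.concatenate([z_i, z_j], axis=0)             # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize -> cosine sim
    sim = z @ z.T / temperature                        # (2N, 2N) similarity logits
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    N = z_i.shape[0]
    # row i < N has its positive at i + N; row i >= N has it at i - N
    pos = np.concatenate([np.arange(N, 2 * N), np.arange(N)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))        # softmax normalizer per row
    loss = (logsumexp - sim[np.arange(2 * N), pos]).mean()
    return loss
```

With well-aligned views (positives nearly identical) the loss should be noticeably lower than with unrelated "views", which is the behavior the training objective exploits.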

(Image credit: Schroff et al. 2015)

Papers

Showing 5701–5725 of 6661 papers

Title | Status | Hype
Automatic Data Augmentation Selection and Parametrization in Contrastive Self-Supervised Speech Representation Learning | Code | 0
CoCoSoDa: Effective Contrastive Learning for Code Search | — | 0
Tencent Text-Video Retrieval: Hierarchical Cross-Modal Interactions with Multi-Level Representations | — | 0
Learning from Untrimmed Videos: Self-Supervised Video Representation Learning with Hierarchical Consistency | — | 0
Detail-recovery Image Deraining via Dual Sample-augmented Contrastive Learning | Code | 0
Hierarchical Self-supervised Representation Learning for Movie Understanding | — | 0
Beyond Separability: Analyzing the Linear Transferability of Contrastive Representations to Related Subpopulations | — | 0
A Transformer-Based Contrastive Learning Approach for Few-Shot Sign Language Recognition | — | 0
Transient motion classification through turbid volumes via parallelized single-photon detection and deep contrastive embedding | — | 0
Estimating Fine-Grained Noise Model via Contrastive Learning | — | 0
Bayesian Negative Sampling for Recommendation | Code | 0
A Dual-Contrastive Framework for Low-Resource Cross-Lingual Named Entity Recognition | Code | 0
Learning List-wise Representation in Reinforcement Learning for Ads Allocation with Multiple Auxiliary Tasks | — | 0
Transformer-Empowered Content-Aware Collaborative Filtering | — | 0
CL-XABSA: Contrastive Learning for Cross-lingual Aspect-based Sentiment Analysis | Code | 0
Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning | Code | 0
CAT-Det: Contrastively Augmented Transformer for Multi-modal 3D Object Detection | — | 0
Marginal Contrastive Correspondence for Guided Image Generation | — | 0
Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation | — | 0
ViSTA: Vision and Scene Text Aggregation for Cross-Modal Retrieval | — | 0
Semantic Pose Verification for Outdoor Visual Localization with Self-supervised Contrastive Learning | — | 0
Self-distillation Augmented Masked Autoencoders for Histopathological Image Classification | — | 0
How Does SimSiam Avoid Collapse Without Negative Samples? A Unified Understanding with Self-supervised Contrastive Learning | — | 0
Controllable Augmentations for Video Representation Learning | — | 0
Weakly-supervised Temporal Path Representation Learning with Contrastive Curriculum Learning -- Extended Version | Code | 0
Page 229 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | — | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | — | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | — | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | — | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | — | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | — | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | — | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | — | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | — | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 10..5sec | — | 1 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | — | Unverified