SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for self-supervised representation learning. The goal is to learn a representation of data such that similar (positive) instances lie close together in the representation space, while dissimilar (negative) instances are pushed far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
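The idea above can be sketched with the InfoNCE objective commonly used in contrastive methods: each anchor embedding is pulled toward its positive view and pushed away from the other samples in the batch, which serve as in-batch negatives. This is a minimal NumPy sketch for illustration; the function name and the temperature value are our own choices, not from any specific paper on this page.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE contrastive loss (illustrative sketch).

    anchors, positives: (N, D) arrays; row i of `positives` is the
    positive view for row i of `anchors`; all other rows act as negatives.
    """
    # L2-normalise embeddings so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature  # (N, N) similarity matrix
    # Row i's positive sits on the diagonal; off-diagonal entries are negatives.
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

With correctly paired views the loss is small; shuffling the positives so pairs no longer match makes it larger, which is exactly the signal the representation is trained on.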

(Image credit: Schroff et al. 2015)

Papers

Showing 2426–2450 of 6661 papers

Title | Status | Hype
A Dual-Contrastive Framework for Low-Resource Cross-Lingual Named Entity Recognition | Code | 0
Less Attention is More: Prompt Transformer for Generalized Category Discovery | Code | 0
Bootstrapping Informative Graph Augmentation via A Meta Learning Approach | Code | 0
An Experimental Comparison Of Multi-view Self-supervised Methods For Music Tagging | Code | 0
Leave No One Behind: Online Self-Supervised Self-Distillation for Sequential Recommendation | Code | 0
Length is a Curse and a Blessing for Document-level Semantics | Code | 0
Learning What You Need from What You Did: Product Taxonomy Expansion with User Behaviors Supervision | Code | 0
Learning with Open-world Noisy Data via Class-independent Margin in Dual Representation Space | Code | 0
Lesion-Aware Contrastive Representation Learning for Histopathology Whole Slide Images Analysis | Code | 0
Leveraging Contrastive Learning and Self-Training for Multimodal Emotion Recognition with Limited Labeled Samples | Code | 0
Learning to Plan via Supervised Contrastive Learning and Strategic Interpolation: A Chess Case Study | Code | 0
Bootstrap Latents of Nodes and Neighbors for Graph Self-Supervised Learning | Code | 0
Learning Transferable Pedestrian Representation from Multimodal Information Supervision | Code | 0
Boost-RS: Boosted Embeddings for Recommender Systems and its Application to Enzyme-Substrate Interaction Prediction | Code | 0
Learning to Locate Visual Answer in Video Corpus Using Question | Code | 0
Learning Tree-Structured Composition of Data Augmentation | Code | 0
Contrastive Learning for Task-Independent SpeechLLM-Pretraining | Code | 0
An Empirical Study of Accuracy-Robustness Tradeoff and Training Efficiency in Self-Supervised Learning | Code | 0
Boosting Short Text Classification with Multi-Source Information Exploration and Dual-Level Contrastive Learning | Code | 0
Contrastive Learning for Sleep Staging based on Inter Subject Correlation | Code | 0
ACE: Zero-Shot Image to Image Translation via Pretrained Auto-Contrastive-Encoder | Code | 0
Learning the Simplicity of Scattering Amplitudes | Code | 0
Learning Semi-Supervised Medical Image Segmentation from Spatial Registration | Code | 0
Boosting Semi-Supervised Scene Text Recognition via Viewing and Summarizing | Code | 0
An efficient framework based on large foundation model for cervical cytopathology whole slide image screening | Code | 0
Page 98 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 10..5sec | 1 | | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified