SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
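The "pull similar together, push dissimilar apart" objective is commonly implemented as an InfoNCE-style loss. The sketch below is a minimal NumPy illustration (not any specific paper's implementation): each anchor embedding is scored against every candidate, and the matching pair on the diagonal is treated as the correct class in a softmax.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss.
    anchors, positives: (N, D) L2-normalized embeddings; row i of
    `positives` is the positive view for row i of `anchors`."""
    # Scaled cosine similarity between every anchor and every candidate.
    logits = anchors @ positives.T / temperature          # (N, N)
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    # Log-softmax over candidates; the true pair sits on the diagonal.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
# A positive "view": the same embeddings with small noise, re-normalized.
zp = z + 0.05 * rng.normal(size=z.shape)
zp /= np.linalg.norm(zp, axis=1, keepdims=True)
matched = info_nce_loss(z, zp)
mismatched = info_nce_loss(z, np.roll(zp, 1, axis=0))
print(matched < mismatched)  # correctly paired views give the lower loss
```

The temperature value (0.1 here) is an illustrative choice; in practice it is a tuned hyperparameter that controls how sharply the softmax concentrates on the hardest negatives.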

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
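For retrieval-style downstream tasks, the learned embeddings are typically compared by cosine similarity and the gallery is ranked against the query. A minimal sketch, assuming embeddings are already computed (the random vectors below stand in for a trained encoder's output):

```python
import numpy as np

def retrieve(query, gallery, k=3):
    """Return indices of the k gallery items most cosine-similar to
    the query, as in image or cross-modal retrieval."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity to each gallery item
    return np.argsort(-sims)[:k]      # highest similarity first

rng = np.random.default_rng(1)
gallery = rng.normal(size=(100, 32))               # stand-in embeddings
query = gallery[42] + 0.01 * rng.normal(size=32)   # near-duplicate of item 42
print(retrieve(query, gallery)[0])  # 42
```

Because a contrastively trained encoder places matching instances close together, the same nearest-neighbour lookup also works across modalities (e.g. a text query against image embeddings) when both encoders share the representation space.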

(Image credit: Schroff et al. 2015)

Papers

Showing 2626–2650 of 6661 papers

Title | Status | Hype
Languages Transferred Within the Encoder: On Representation Transfer in Zero-Shot Multilingual Translation | Code | 0
Beyond Contrastive Learning: Synthetic Data Enables List-wise Training with Multiple Levels of Relevance | Code | 0
Large Language Models Meet Contrastive Learning: Zero-Shot Emotion Recognition Across Languages | Code | 0
Extremely Fine-Grained Visual Classification over Resembling Glyphs in the Wild | Code | 0
Learning Contrastive Feature Representations for Facial Action Unit Detection | Code | 0
Fine-Grained Representation Learning via Multi-Level Contrastive Learning without Class Priors | Code | 0
Language Agnostic Multilingual Information Retrieval with Contrastive Learning | Code | 0
SimCLF: A Simple Contrastive Learning Framework for Function-level Binary Embeddings | Code | 0
Contrastive Learning for Task-Independent SpeechLLM-Pretraining | Code | 0
CoDeGAN: Contrastive Disentanglement for Generative Adversarial Network | Code | 0
Label Refinement via Contrastive Learning for Distantly-Supervised Named Entity Recognition | Code | 0
Label Structure Preserving Contrastive Embedding for Multi-Label Learning with Missing Labels | Code | 0
Exploring User Retrieval Integration towards Large Language Models for Cross-Domain Sequential Recommendation | Code | 0
LACMA: Language-Aligning Contrastive Learning with Meta-Actions for Embodied Instruction Following | Code | 0
Exploring the Trade-off Between Model Performance and Explanation Plausibility of Text Classifiers Using Human Rationales | Code | 0
Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks | Code | 0
Model Steering: Learning with a Reference Model Improves Generalization Bounds and Scaling Laws | Code | 0
L^2CL: Embarrassingly Simple Layer-to-Layer Contrastive Learning for Graph Collaborative Filtering | Code | 0
Fuzzy Cluster-Aware Contrastive Clustering for Time Series | Code | 0
Exploring the Effectiveness of Multi-stage Fine-tuning for Cross-encoder Re-rankers | Code | 0
GraphLearner: Graph Node Clustering with Fully Learnable Augmentation | Code | 0
Label-aware Hard Negative Sampling Strategies with Momentum Contrastive Learning for Implicit Hate Speech Detection | Code | 0
Exploring Semantic Consistency in Unpaired Image Translation to Generate Data for Surgical Applications | Code | 0
MolPLA: A Molecular Pretraining Framework for Learning Cores, R-Groups and their Linker Joints | Code | 0
Knowledge-aware Dual-side Attribute-enhanced Recommendation | Code | 0
Page 106 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 10..5sec | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified