SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
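The "pull similar pairs together, push dissimilar pairs apart" objective is commonly implemented as the InfoNCE loss. Below is a minimal, dependency-free sketch of that loss for a single anchor; the function names are illustrative and not taken from any paper listed on this page.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor embedding.

    The loss is the negative log-softmax over similarity scores,
    with the positive pair in the numerator: it is small when the
    anchor is much closer to the positive than to the negatives.
    """
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    # Numerically stable log-sum-exp.
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_sum)
```

A well-trained encoder drives this loss toward zero: the positive's similarity dominates the softmax, so the representation space ends up with similar instances close together and dissimilar ones far apart.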

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
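For retrieval-style downstream tasks, the learned embeddings can be used directly: rank a gallery of items by similarity to a query embedding. A minimal sketch (the gallery/query setup here is hypothetical, purely for illustration):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def retrieve(query, gallery, k=1):
    """Return the names of the k gallery embeddings most similar
    to the query embedding, ranked by cosine similarity."""
    ranked = sorted(gallery.items(),
                    key=lambda item: cosine(query, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]
```

The same embeddings can feed a linear classifier or a clustering algorithm without retraining the encoder, which is why contrastive pretraining transfers well across tasks.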

(Image credit: Schroff et al. 2015)

Papers

Showing 1826–1850 of 6661 papers

| Title | Status | Hype |
| --- | --- | --- |
| In-context Contrastive Learning for Event Causality Identification | Code | 1 |
| Automated Radiology Report Generation: A Review of Recent Advances | | 0 |
| Relative Counterfactual Contrastive Learning for Mitigating Pretrained Stance Bias in Stance Detection | | 0 |
| AMCEN: An Attention Masking-based Contrastive Event Network for Two-stage Temporal Knowledge Graph Reasoning | | 0 |
| Enhancing Semantics in Multimodal Chain of Thought via Soft Negative Sampling | Code | 1 |
| HecVL: Hierarchical Video-Language Pretraining for Zero-shot Surgical Phase Recognition | Code | 2 |
| UniCorn: A Unified Contrastive Learning Approach for Multi-view Molecular Representation Learning | | 0 |
| Learning Generalized Medical Image Representations through Image-Graph Contrastive Pretraining | | 0 |
| Diffusion-based Contrastive Learning for Sequential Recommendation | Code | 1 |
| Factual Serialization Enhancement: A Key Innovation for Chest X-ray Report Generation | Code | 1 |
| Learning Temporally Equivariance for Degenerative Disease Progression in OCT by Predicting Future Representations | Code | 0 |
| QCRD: Quality-guided Contrastive Rationale Distillation for Large Language Models | | 0 |
| Self-supervised contrastive learning unveils cortical folding pattern linked to prematurity | Code | 0 |
| Dual-level Hypergraph Contrastive Learning with Adaptive Temperature Enhancement | Code | 1 |
| Self-Distillation Improves DNA Sequence Inference | Code | 0 |
| Efficient Vision-Language Pre-training by Cluster Masking | Code | 1 |
| RMT-BVQA: Recurrent Memory Transformer-based Blind Video Quality Assessment for Enhanced Video Content | | 0 |
| T3RD: Test-Time Training for Rumor Detection on Social Media | Code | 0 |
| Fine-tuning the SwissBERT Encoder Model for Embedding Sentences and Documents | | 0 |
| A Supervised Information Enhanced Multi-Granularity Contrastive Learning Framework for EEG Based Emotion Recognition | Code | 1 |
| CoViews: Adaptive Augmentation Using Cooperative Views for Enhanced Contrastive Learning | | 0 |
| Machine Unlearning in Contrastive Learning | | 0 |
| PCLMix: Weakly Supervised Medical Image Segmentation via Pixel-Level Contrastive Learning and Dynamic Mix Augmentation | Code | 0 |
| Novel Class Discovery for Ultra-Fine-Grained Visual Categorization | Code | 1 |
| HC^2L: Hybrid and Cooperative Contrastive Learning for Cross-lingual Spoken Language Understanding | | 0 |
Page 74 of 267

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified |
| 2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified |
| 3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified |
| 4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified |
| 5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified |
| 6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified |
| 7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified |
| 8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified |
| 9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified |
| 10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | | sec | 10..5 | 1 | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified |