SOTAVerified

Contrastive Learning

Contrastive learning is a deep learning technique for unsupervised (often self-supervised) representation learning. The goal is to learn an embedding of the data such that similar instances (positive pairs) lie close together in the representation space, while dissimilar instances (negative pairs) are pushed far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.

(Image credit: Schroff et al. 2015)
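The pull-together/push-apart objective described above is commonly implemented as an InfoNCE-style loss: for a batch of paired embeddings, each item's positive partner is scored against all other items in the batch as negatives. The sketch below is a minimal NumPy illustration of this idea, not any specific paper's implementation; the function name and the temperature default are arbitrary choices for the example.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Minimal InfoNCE-style contrastive loss.

    Row i of z1 and row i of z2 form a positive pair; every other row of z2
    serves as a negative for z1[i]. Rows are L2-normalised so dot products
    are cosine similarities.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N) similarity matrix
    # Row-wise log-softmax; the diagonal entries are the positive pairs.
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Loss is the mean negative log-probability of the positives.
    return -np.mean(np.diag(log_prob))
```

Feeding two identical (or strongly correlated) sets of embeddings yields a low loss, while mismatched pairings yield a higher one, which is exactly the "similar close, dissimilar far" behaviour the representation is trained toward. The triplet loss of Schroff et al. (2015), credited for the figure above, pursues the same goal with explicit (anchor, positive, negative) triples and a margin instead of a softmax over the batch.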

Papers

Showing 226–250 of 6661 papers

Title | Status | Hype
CODER: Knowledge infused cross-lingual medical term embedding for term normalization | Code | 1
CoCon: Cooperative-Contrastive Learning | Code | 1
CoDi: Co-evolving Contrastive Diffusion Models for Mixed-type Tabular Synthesis | Code | 1
COLO: A Contrastive Learning based Re-ranking Framework for One-Stage Summarization | Code | 1
Composed Image Retrieval using Contrastive Learning and Task-oriented CLIP-based Features | Code | 1
COCOA: Cross Modality Contrastive Learning for Sensor Data | Code | 1
A Language Model based Framework for New Concept Placement in Ontologies | Code | 1
Automatic Biomedical Term Clustering by Learning Fine-grained Term Representations | Code | 1
CoCo: Coherence-Enhanced Machine-Generated Text Detection Under Data Limitation With Contrastive Learning | Code | 1
Automated Spatio-Temporal Graph Contrastive Learning | Code | 1
ACTION++: Improving Semi-supervised Medical Image Segmentation with Adaptive Anatomical Contrast | Code | 1
A latent space for unsupervised MR image quality control via artifact assessment | Code | 1
Actionness Inconsistency-guided Contrastive Learning for Weakly-supervised Temporal Action Localization | Code | 1
COARSE3D: Class-Prototypes for Contrastive Learning in Weakly-Supervised 3D Point Cloud Segmentation | Code | 1
Learning the Unlearned: Mitigating Feature Suppression in Contrastive Learning | Code | 1
CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion | Code | 1
CoCoNets: Continuous Contrastive 3D Scene Representations | Code | 1
AIRCHITECT v2: Learning the Hardware Accelerator Design Space through Unified Representations | Code | 1
Bag of Instances Aggregation Boosts Self-supervised Distillation | Code | 1
BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label | Code | 1
BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning | Code | 1
Aligning Language Models with Human Preferences via a Bayesian Approach | Code | 1
Breaking the Batch Barrier (B3) of Contrastive Learning via Smart Batch Mining | Code | 1
Aligning Pretraining for Detection via Object-Level Contrastive Learning | Code | 1
Automatically Generating Numerous Context-Driven SFT Data for LLMs across Diverse Granularity | Code | 1
Page 10 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 10..5sec1 | | | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified