SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
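The "similar close, dissimilar far" objective described above is commonly realized as an InfoNCE-style loss over positive pairs. The sketch below is a generic illustration in NumPy, not code from any of the papers listed on this page; the function name, temperature value, and toy data are invented for the example.

```python
# Minimal InfoNCE-style contrastive loss sketch (illustrative, NumPy only).
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Pull each anchor toward its own positive; push it from everyone else's.

    anchors, positives: (N, D) arrays; row i of `positives` is the
    "similar" (e.g. augmented) view of row i of `anchors`.
    """
    # L2-normalize so the dot product is cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature  # (N, N) similarity matrix
    # Row i's "correct class" is column i: cross-entropy against the diagonal.
    logsumexp = np.log(np.exp(logits).sum(axis=1))
    return float(np.mean(logsumexp - np.diag(logits)))

# Toy check: matched pairs should yield a lower loss than mismatched ones.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
aligned = info_nce_loss(x, x + 0.01 * rng.normal(size=x.shape))
shuffled = info_nce_loss(x, rng.permutation(x))
print(aligned < shuffled)
```

A full training setup (as in SimCLR or MoCo) wraps an encoder network around this loss and generates positives via data augmentation, but the geometry being optimized is exactly the one above.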

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.

(Image credit: Schroff et al. 2015)

Papers

Showing 3926–3950 of 6661 papers

| Title | Status | Hype |
|---|---|---|
| BOURNE: Bootstrapped Self-supervised Learning Framework for Unified Graph Anomaly Detection |  | 0 |
| Brain-Cognition Fingerprinting via Graph-GCCA with Contrastive Learning |  | 0 |
| BrainDreamer: Reasoning-Coherent and Controllable Image Generation from EEG Brain Signals via Language Guidance |  | 0 |
| Brain Tissue Segmentation Across the Human Lifespan via Supervised Contrastive Learning |  | 0 |
| Breaking the Bank with ChatGPT: Few-Shot Text Classification for Finance |  | 0 |
| Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack |  | 0 |
| Breaking the Global North Stereotype: A Global South-centric Benchmark Dataset for Auditing and Mitigating Biases in Facial Recognition Systems |  | 0 |
| Leveraging Medical Foundation Model Features in Graph Neural Network-Based Retrieval of Breast Histopathology Images |  | 0 |
| Breast tumor classification based on self-supervised contrastive learning from ultrasound videos |  | 0 |
| Bridge the Gap between Language models and Tabular Understanding |  | 0 |
| Bridge the Gap between Supervised and Unsupervised Learning for Fine-Grained Classification |  | 0 |
| Bridging Contrastive Learning and Domain Adaptation: Theoretical Perspective and Practical Application |  | 0 |
| Bridging High-Quality Audio and Video via Language for Sound Effects Retrieval from Visual Queries |  | 0 |
| Bridging Text and Image for Artist Style Transfer via Contrastive Learning |  | 0 |
| Bridging the Emotional Semantic Gap via Multimodal Relevance Estimation |  | 0 |
| Bridging the Gap between Language Models and Cross-Lingual Sequence Labeling |  | 0 |
| Bridging the Gap Between Semantic and User Preference Spaces for Multi-modal Music Representation Learning |  | 0 |
| Bridging the Modality Gap: Dimension Information Alignment and Sparse Spatial Constraint for Image-Text Matching |  | 0 |
| BRIDO: Bringing Democratic Order to Abstractive Summarization |  | 0 |
| Brief Introduction to Contrastive Learning Pretext Tasks for Visual Representation |  | 0 |
| Bringing CLIP to the Clinic: Dynamic Soft Labels and Negation-Aware Learning for Medical Analysis |  | 0 |
| Buffer is All You Need: Defending Federated Learning against Backdoor Attacks under Non-iids via Buffering |  | 0 |
| Building an Enhanced Autoregressive Document Retriever Leveraging Supervised Contrastive Learning |  | 0 |
| Building Shortcuts between Distant Nodes with Biaffine Mapping for Graph Convolutional Networks |  | 0 |
| Building Vision-Language Models on Solid Foundations with Masked Distillation |  | 0 |
Page 158 of 267

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 |  | Unverified |
| 2 | ResNet50 | ImageNet Top-1 Accuracy | 73 |  | Unverified |
| 3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 |  | Unverified |
| 4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 |  | Unverified |
| 5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 |  | Unverified |
| 6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 |  | Unverified |
| 7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 |  | Unverified |
| 8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 |  | Unverified |
| 9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 |  | Unverified |
| 10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 |  | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 |  | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 |  | Unverified |