SOTAVerified

Contrastive Learning

Contrastive learning is a deep learning technique for unsupervised (or self-supervised) representation learning. The goal is to learn an embedding of the data such that similar instances are mapped close together in the representation space, while dissimilar instances are pushed far apart.

It has proven effective across a range of computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. The learned representations can then serve as features for downstream tasks such as classification and clustering.
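The "pull positives together, push negatives apart" objective is most commonly instantiated as the InfoNCE (NT-Xent) loss. Below is a minimal NumPy sketch for a batch of positive pairs; the function name, temperature default, and batch layout are illustrative choices, not tied to any specific paper on this page.

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.5):
    """InfoNCE / NT-Xent loss over a batch of positive pairs.

    z_i, z_j: (N, D) embeddings; row k of z_i and row k of z_j form a
    positive pair, and every other row in the batch acts as a negative.
    """
    # L2-normalize so the dot product is cosine similarity
    z_i = z_i / np.linalg.norm(z_i, axis=1, keepdims=True)
    z_j = z_j / np.linalg.norm(z_j, axis=1, keepdims=True)
    z = np.concatenate([z_i, z_j], axis=0)           # (2N, D)
    sim = z @ z.T / temperature                      # (2N, 2N) similarities
    np.fill_diagonal(sim, -np.inf)                   # exclude self-similarity
    n = z_i.shape[0]
    # the positive for row k is row k+n, and vice versa
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of the positive against all other entries in the row
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

The loss is lowest when each pair of views agrees while staying dissimilar from the rest of the batch, which is why large batches (more negatives) tend to help.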

(Image credit: Schroff et al. 2015)

Papers

Showing 3501–3525 of 6661 papers

All papers below are listed with an empty Status and a Hype score of 0.

Weakly-Supervised Text Instance Segmentation
Weakly-Supervised Video Object Grounding via Causal Intervention
Weak Supervision for Real World Graphs
Weak Supervision with Arbitrary Single Frame for Micro- and Macro-expression Spotting
Weak-to-Strong Compositional Learning from Generative Models for Language-based Object Detection
WebGuard++: Interpretable Malicious URL Detection via Bidirectional Fusion of HTML Subgraphs and Multi-Scale Convolutional BERT
WeedCLR: Weed Contrastive Learning through Visual Representations with Class-Optimized Loss in Long-Tailed Datasets
Weighted KL-Divergence for Document Ranking Model Refinement
Weighted Point Cloud Normal Estimation
What About Taking Policy as Input of Value Function: Policy-extended Value Function Approximator
Uncovering the Over-smoothing Challenge in Image Super-Resolution: Entropy-based Quantification and Contrastive Optimization
What Makes for Good Representations for Contrastive Learning
What Makes for Good Views for Contrastive Learning?
What Remains of Visual Semantic Embeddings
What Should Not Be Contrastive in Contrastive Learning
Finding Shared Decodable Concepts and their Negations in the Brain
What Time Tells Us? An Explorative Study of Time Awareness Learned from Static Images
What to align in multimodal contrastive learning?
When can we Approximate Wide Contrastive Models with Neural Tangent Kernels and Principal Component Analysis?
When does CLIP generalize better than unimodal models? When judging human-centric concepts
When Does Contrastive Visual Representation Learning Work?
When Graph Contrastive Learning Backfires: Spectral Vulnerability and Defense in Recommendation
When hard negative sampling meets supervised contrastive learning
Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression
WildSAT: Learning Satellite Image Representations from Wildlife Observations

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 10..5sec | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified