SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised (self-supervised) representation learning. The goal is to learn a representation of the data such that similar instances ("positive pairs") lie close together in the representation space, while dissimilar instances ("negative pairs") lie far apart.
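A common instantiation of this objective is the NT-Xent (InfoNCE-style) loss popularized by SimCLR-family methods: each embedding is pulled toward its positive partner and pushed away from every other embedding in the batch. A minimal NumPy sketch, assuming two views `z_i` and `z_j` of the same N samples (function name and batch layout are illustrative, not from any specific library):

```python
import numpy as np

def nt_xent_loss(z_i, z_j, temperature=0.5):
    """NT-Xent loss over a batch of N positive pairs (z_i[k], z_j[k]).
    The other 2N-2 embeddings in the batch act as negatives."""
    z = np.concatenate([z_i, z_j], axis=0)             # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = (z @ z.T) / temperature                      # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # a sample is not its own pair
    n = z_i.shape[0]
    # the positive for row k is row k+n (and row k-n for the second half)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(log_denom - sim[np.arange(2 * n), pos]))
```

Perfectly aligned views (identical embeddings for each pair) yield a lower loss than randomly matched views, which is the signal that drives the encoder toward the geometry described above.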

Contrastive learning has proven effective across computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. The learned representations also serve as general-purpose features for downstream tasks such as classification and clustering.
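For the retrieval use case, the learned embeddings are typically compared by cosine similarity. A hypothetical sketch, assuming embeddings have already been computed by some trained encoder (the `retrieve` helper is illustrative):

```python
import numpy as np

def retrieve(query, gallery, k=5):
    """Rank gallery embeddings by cosine similarity to the query embedding
    and return the indices of the top-k matches."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = g @ q                   # cosine similarity to each gallery item
    return np.argsort(-scores)[:k]  # indices of the k most similar items
```

The same ranking primitive underlies cross-modal retrieval: when image and text encoders are trained contrastively into a shared space, `query` and `gallery` can come from different modalities.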

(Image credit: Schroff et al. 2015)

Papers

Showing 4101-4150 of 6661 papers

Title | Hype
Noise Is Also Useful: Negative Correlation-Steered Latent Contrastive Learning | 0
Noise-Robust Contrastive Learning | 0
No More Shortcuts: Realizing the Potential of Temporal Self-Supervision | 0
Non-Contrastive Learning-based Behavioural Biometrics for Smart IoT Devices | 0
No One Representation to Rule Them All: Overlapping Features of Training Methods | 0
Not All Documents Are What You Need for Extracting Instruction Tuning Data | 0
Not All Regions are Worthy to be Distilled: Region-aware Knowledge Distillation Towards Efficient Image-to-Image Translation | 0
NoteLLM: A Retrievable Large Language Model for Note Recommendation | 0
Nova: Generative Language Models for Assembly Code with Hierarchical Attention and Contrastive Learning | 0
Novel Class Discovery for Open Set Raga Classification | 0
Novelty Detection via Contrastive Learning with Negative Data Augmentation | 0
Novelty Detection with Rotated Contrastive Predictive Coding | 0
NukesFormers: Unpaired Hyperspectral Image Generation with Non-Uniform Domain Alignment | 0
NuwaTS: a Foundation Model Mending Every Incomplete Time Series | 0
NV-Retriever: Improving text embedding models with effective hard-negative mining | 0
NYCU_TWD@LT-EDI-ACL2022: Ensemble Models with VADER and Contrastive Learning for Detecting Signs of Depression from Social Media | 0
O1 Embedder: Let Retrievers Think Before Action | 0
Object2Scene: Putting Objects in Context for Open-Vocabulary 3D Detection | 0
Self-Supervised Object Goal Navigation with In-Situ Finetuning | 0
OCCO: LVM-guided Infrared and Visible Image Fusion Framework based on Object-aware and Contextual COntrastive Learning | 0
OCL: Ordinal Contrastive Learning for Imputating Features with Progressive Labels | 0
OCTCube-M: A 3D multimodal optical coherence tomography foundation model for retinal and systemic diseases with cross-cohort and cross-device validation | 0
OFAR: A Multimodal Evidence Retrieval Framework for Illegal Live-streaming Identification | 0
OmniSage: Large Scale, Multi-Entity Heterogeneous Graph Representation Learning | 0
On Bottleneck Features for Text-Dependent Speaker Verification Using X-vectors | 0
On Class Separability Pitfalls In Audio-Text Contrastive Zero-Shot Learning | 0
One-Bit Active Query With Contrastive Pairs | 0
On Finite-Sample Identifiability of Contrastive Learning-Based Nonlinear Independent Component Analysis | 0
On Learning Universal Representations Across Languages | 0
Metric Compatible Training for Online Backfilling in Large-Scale Retrieval | 0
Online Continual Learning with Contrastive Vision Transformer | 0
Online Object Representations with Contrastive Learning | 0
Online pre-training with long-form videos | 0
On Mutual Information in Contrastive Learning for Visual Representations | 0
On Negative Sampling for Audio-Visual Contrastive Learning from Movies | 0
On Negative Sampling for Contrastive Audio-Text Retrieval | 0
On Self-Supervised Image Representations for GAN Evaluation | 0
On Task-personalized Multimodal Few-shot Learning for Visually-rich Document Entity Retrieval | 0
On the Adversarial Robustness of Graph Contrastive Learning Methods | 0
On the Comparison between Multi-modal and Single-modal Contrastive Learning | 0
On the Difficulty of Defending Contrastive Learning against Backdoor Attacks | 0
On the Effectiveness of Sampled Softmax Loss for Item Recommendation | 0
On the Effect of Data-Augmentation on Local Embedding Properties in the Contrastive Learning of Music Audio Representations | 0
On the Importance of Contrastive Loss in Multimodal Learning | 0
On the Informativeness of Supervision Signals | 0
On the Marginal Benefit of Active Learning: Does Self-Supervision Eat Its Cake? | 0
On the Memorization Properties of Contrastive Learning | 0
On the Provable Advantage of Unsupervised Pretraining | 0
On the Robustness of Aspect-based Sentiment Analysis: Rethinking Model, Data, and Training | 0
On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | 10..5sec | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified