SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
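
As a concrete illustration of this objective, here is a minimal sketch of an InfoNCE-style contrastive loss in PyTorch, in the spirit of SimCLR-type frameworks. The function name, tensor shapes, and temperature value are illustrative assumptions, not details taken from any paper listed below.

```python
# Minimal InfoNCE-style contrastive loss sketch (PyTorch).
# Assumes z_i and z_j are embeddings of two augmented views of the same
# batch, shape [N, D]; names and the temperature are illustrative only.
import torch
import torch.nn.functional as F

def contrastive_loss(z_i: torch.Tensor, z_j: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    z_i = F.normalize(z_i, dim=1)               # unit-norm embeddings -> cosine similarity
    z_j = F.normalize(z_j, dim=1)
    logits = z_i @ z_j.t() / temperature        # pairwise similarities, scaled by temperature
    targets = torch.arange(z_i.size(0), device=z_i.device)
    # Row k's positive is column k (the other view of instance k);
    # all other columns in that row act as negatives.
    return F.cross_entropy(logits, targets)
```

Pulling each positive pair's similarity up while pushing the remaining in-batch pairs down is what drives similar instances together and dissimilar instances apart in the representation space.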

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. The learned representations can also serve as features for downstream tasks such as classification and clustering.

(Image credit: Schroff et al. 2015)
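
To use the learned representation for a downstream classification task, a common recipe is to freeze the pretrained encoder and train only a lightweight classifier on top (a linear probe). The sketch below assumes a generic PyTorch encoder that maps inputs to feature vectors; `encoder`, `feat_dim`, and `num_classes` are hypothetical placeholders.

```python
# Linear-probe sketch: reuse frozen contrastive features for classification.
# `encoder`, `feat_dim`, and `num_classes` are assumed placeholders.
import torch.nn as nn

def build_linear_probe(encoder: nn.Module, feat_dim: int, num_classes: int) -> nn.Module:
    for p in encoder.parameters():
        p.requires_grad = False                 # keep the pretrained representation fixed
    return nn.Sequential(encoder, nn.Flatten(), nn.Linear(feat_dim, num_classes))
```

Only the final linear layer receives gradient updates, so downstream accuracy directly reflects the quality of the frozen contrastive features.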

Papers

Showing 5601–5650 of 6661 papers

Title | Status | Hype
Improving Micro-video Recommendation via Contrastive Multiple Interests | Code | 0
CREATER: CTR-driven Advertising Text Generation with Controlled Pre-Training and Contrastive Fine-Tuning | | 0
Relation Extraction with Weighted Contrastive Pre-training on Distant Supervision | Code | 0
A two-steps approach to improve the performance of Android malware detectors | | 0
Attention-aware contrastive learning for predicting T cell receptor-antigen binding specificity | | 0
Dynamic Recognition of Speakers for Consent Management by Contrastive Embedding Replay | | 0
Toward a Geometrical Understanding of Self-supervised Contrastive Learning | | 0
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning | | 0
Arithmetic-Based Pretraining -- Improving Numeracy of Pretrained Language Models | Code | 0
Embodied vision for learning object representations | | 0
SimCPSR: Simple Contrastive Learning for Paper Submission Recommendation System | Code | 0
Pre-trained Language Models as Re-Annotators | | 0
A simple framework for contrastive learning phases of matter | | 0
Hyperspectral Image Classification With Contrastive Graph Convolutional Network | Code | 0
Simple Contrastive Graph Clustering | | 0
Deep Graph Clustering via Mutual Information Maximization and Mixture Model | | 0
Transformer-based Cross-Modal Recipe Embeddings with Large Batch Training | | 0
CoDo: Contrastive Learning with Downstream Background Invariance for Detection | | 0
Reconstruction Enhanced Multi-View Contrastive Learning for Anomaly Detection on Attributed Networks | | 0
Model-Contrastive Learning for Backdoor Defense | Code | 0
Visual Encoding and Debiasing for CTR Prediction | | 0
A Closer Look at Few-shot Image Generation | | 0
Label-aware Multi-level Contrastive Learning for Cross-lingual Spoken Language Understanding | | 0
KECP: Knowledge Enhanced Contrastive Prompting for Few-shot Extractive Question Answering | | 0
Relational Representation Learning in Visually-Rich Documents | | 0
Explicit View-labels Matter: A Multifacet Complementarity Study of Multi-view Clustering | | 0
CODE-MVP: Learning to Represent Source Code from Multiple Views with Contrastive Pre-Training | | 0
Lexical Knowledge Internalization for Neural Dialog Generation | Code | 0
Analysing the Robustness of Dual Encoders for Dense Retrieval Against Misspellings | Code | 0
Do More Negative Samples Necessarily Hurt in Contrastive Learning? | | 0
i-Code: An Integrative and Composable Multimodal Learning Framework | | 0
FastGCL: Fast Self-Supervised Learning on Graphs via Contrastive Neighborhood Aggregation | | 0
Attention-wise masked graph contrastive learning for predicting molecular property | | 0
A Contrastive Framework for Learning Sentence Representations from Pairwise and Triple-wise Perspective in Angular Space | | 0
KNN-Contrastive Learning for Out-of-Domain Intent Classification | | 0
Contrastive Learning-Enhanced Nearest Neighbor Mechanism for Multi-Label Text Classification | | 0
Cross-Lingual Contrastive Learning for Fine-Grained Entity Typing for Low-Resource Languages | Code | 0
GL-CLeF: A Global–Local Contrastive Learning Framework for Cross-lingual Spoken Language Understanding | Code | 0
Controlled Text Generation Using Dictionary Prior in Variational Autoencoders | | 0
When does CLIP generalize better than unimodal models? When judging human-centric concepts | | 0
Mitigating the Inconsistency Between Word Saliency and Model Confidence with Pathological Contrastive Training | | 0
Mitigating Contradictions in Dialogue Based on Contrastive Learning | | 0
Syntax-guided Contrastive Learning for Pre-trained Language Model | | 0
RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining | | 0
NYCU_TWD@LT-EDI-ACL2022: Ensemble Models with VADER and Contrastive Learning for Detecting Signs of Depression from Social Media | | 0
UTC: A Unified Transformer with Inter-Task Contrastive Learning for Visual Dialog | | 0
Utilizing Cross-Modal Contrastive Learning to Improve Item Categorization BERT Model | | 0
Probing Cross-Lingual Lexical Knowledge from Multilingual Sentence Encoders | | 0
Loss Function Entropy Regularization for Diverse Decision Boundaries | | 0
Heterogeneous Graph Neural Networks using Self-supervised Reciprocally Contrastive Learning | | 0
Page 113 of 134

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | | 10..5sec | 1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified