SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
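The "similar close, dissimilar far" objective described above is commonly implemented with a contrastive loss such as NT-Xent (the normalized temperature-scaled cross-entropy used by SimCLR-style methods). A minimal NumPy sketch, where the batch size, embedding dimension, and temperature are illustrative choices, not values from this page:

```python
import numpy as np

def nt_xent_loss(z_a, z_b, temperature=0.5):
    """NT-Xent contrastive loss over two augmented views.

    z_a, z_b: (N, D) embeddings of two views of the same N instances.
    Row i of z_a and row i of z_b form a positive pair; every other
    row in the combined batch serves as a negative.
    """
    # L2-normalize so the dot product is cosine similarity
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    z = np.concatenate([z_a, z_b], axis=0)      # (2N, D)
    sim = z @ z.T / temperature                 # (2N, 2N) pairwise similarities
    np.fill_diagonal(sim, -np.inf)              # a sample is never its own positive
    n = len(z_a)
    # the positive for sample i is sample i + n (and vice versa)
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])
    # cross-entropy: -log softmax(sim)[i, pos[i]], averaged over all 2N samples
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

When the two views agree (embeddings of the same instance land close together), the loss is small; when views are unrelated, it grows, which is exactly the behavior the paragraph above describes.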

(Image credit: Schroff et al. 2015)
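The credited figure comes from FaceNet (Schroff et al. 2015), which popularized the triplet loss, an earlier contrastive objective with the same "pull positives together, push negatives apart" structure. A minimal sketch (the margin value is illustrative):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss in the style of Schroff et al. 2015: require the
    anchor-positive distance to beat the anchor-negative distance by
    at least `margin`; otherwise incur a penalty."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # squared dist to positive
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # squared dist to negative
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()
```

Triplets that already satisfy the margin contribute zero loss, so training focuses on "hard" triplets where the negative is still too close.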

Papers

Showing 5601-5650 of 6661 papers

Title | Status | Hype
MassNet: A Deep Learning Approach for Body Weight Extraction from A Single Pressure Image | Code | 0
Contrastive Multi-document Question Generation | Code | 0
CMIP-CIL: A Cross-Modal Benchmark for Image-Point Class Incremental Learning | Code | 0
Design as Desired: Utilizing Visual Question Answering for Multimodal Pre-training | Code | 0
With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations | Code | 0
CM3AE: A Unified RGB Frame and Event-Voxel/-Frame Pre-training Framework | Code | 0
Description-Enhanced Label Embedding Contrastive Learning for Text Classification | Code | 0
Masking Improves Contrastive Self-Supervised Learning for ConvNets, and Saliency Tells You Where | Code | 0
Multiview graph dual-attention deep learning and contrastive learning for multi-criteria recommender systems | Code | 0
Demonstrating and Reducing Shortcuts in Vision-Language Representation Learning | Code | 0
Mask-informed Deep Contrastive Incomplete Multi-view Clustering | Code | 0
Semi-Supervised Semantic Segmentation with Cross Teacher Training | Code | 0
Mask-Guided Contrastive Attention Model for Person Re-Identification | Code | 0
Multi-level Relation Learning for Cross-domain Few-shot Hyperspectral Image Classification | Code | 0
Multi-view self-supervised learning for multivariate variable-channel time series | Code | 0
Masked Student Dataset of Expressions | Code | 0
Masked Collaborative Contrast for Weakly Supervised Semantic Segmentation | Code | 0
Semi-weakly Supervised Contrastive Representation Learning for Retinal Fundus Images | Code | 0
MarsEclipse at SemEval-2023 Task 3: Multi-Lingual and Multi-Label Framing Detection with Contrastive Learning | Code | 0
MAPS: Motivation-Aware Personalized Search via LLM-Driven Consultation Alignment | Code | 0
DELTA: Decoupling Long-Tailed Online Continual Learning | Code | 0
AuralSAM2: Enabling SAM2 Hear Through Pyramid Audio-Visual Feature Prompting | Code | 0
MAPConNet: Self-supervised 3D Pose Transfer with Mesh and Point Contrastive Learning | Code | 0
DELAN: Dual-Level Alignment for Vision-and-Language Navigation by Cross-Modal Contrastive Learning | Code | 0
Mao-Zedong At SemEval-2023 Task 4: Label Represention Multi-Head Attention Model With Contrastive Learning-Enhanced Nearest Neighbor Mechanism For Multi-Label Text Classification | Code | 0
Unsupervised Contrastive Analysis for Salient Pattern Detection using Conditional Diffusion Models | Code | 0
ViKL: A Mammography Interpretation Framework via Multimodal Aggregation of Visual-knowledge-linguistic Features | Code | 0
ManiNeg: Manifestation-guided Multimodal Pretraining for Mammography Classification | Code | 0
Weighted Contrastive Hashing | Code | 0
Manifold Contrastive Learning with Variational Lie Group Operators | Code | 0
MVMR: A New Framework for Evaluating Faithfulness of Video Moment Retrieval against Multiple Distractors | Code | 0
MV-MR: multi-views and multi-representations for self-supervised learning and knowledge distillation | Code | 0
Making the Most of Text Semantics to Improve Biomedical Vision--Language Processing | Code | 0
MXM-CLR: A Unified Framework for Contrastive Learning of Multifold Cross-Modal Representations | Code | 0
Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning | Code | 0
Making Pre-trained Language Models Better Continual Few-Shot Relation Extractors | Code | 0
MyriadAL: Active Few Shot Learning for Histopathology | Code | 0
NaFM: Pre-training a Foundation Model for Small-Molecule Natural Products | Code | 0
Named Entity Recognition Under Domain Shift via Metric Learning for Life Sciences | Code | 0
NanoHTNet: Nano Human Topology Network for Efficient 3D Human Pose Estimation | Code | 0
Narrowing the Gap between Supervised and Unsupervised Sentence Representation Learning with Large Language Model | Code | 0
NASiam: Efficient Representation Learning using Neural Architecture Search for Siamese Networks | Code | 0
Sentence Embeddings using Supervised Contrastive Learning | Code | 0
Towards Cross-Modal Text-Molecule Retrieval with Better Modality Alignment | Code | 0
NearbyPatchCL: Leveraging Nearby Patches for Self-Supervised Patch-Level Multi-Class Classification in Whole-Slide Images | Code | 0
Machine Unlearning in Hyperbolic vs. Euclidean Multimodal Contrastive Learning: Adapting Alignment Calibration to MERU | Code | 0
Sentence Representations via Gaussian Embedding | Code | 0
MA-AVT: Modality Alignment for Parameter-Efficient Audio-Visual Transformers | Code | 0
Nearshore Underwater Target Detection Meets UAV-borne Hyperspectral Remote Sensing: A Novel Hybrid-level Contrastive Learning Framework and Benchmark Dataset | Code | 0
NECOMIMI: Neural-Cognitive Multimodal EEG-informed Image Generation with Diffusion Models | Code | 0
Page 113 of 134

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 10..5sec | | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified