SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
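The objective described above is typically implemented as a contrastive loss such as InfoNCE (the NT-Xent loss used by SimCLR-style methods): embeddings of two augmented views of the same instance are pulled together, while all other instances in the batch serve as negatives. Below is a minimal NumPy sketch of this loss; the function name and batch shapes are illustrative, not taken from any paper on this page.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.5):
    """InfoNCE / NT-Xent-style contrastive loss for a batch of paired views.

    z_a, z_b: (N, D) arrays of embeddings for two augmented views of the
    same N instances. Row i of z_a and row i of z_b form a positive pair;
    every other row in the batch acts as a negative.
    """
    # L2-normalize so the dot product is cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)

    # Pairwise similarity matrix, sharpened by the temperature.
    logits = z_a @ z_b.T / temperature          # shape (N, N)

    # Cross-entropy where the "correct class" for row i is column i:
    # the positive pair should out-score all in-batch negatives.
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))
```

When the two views are perfectly aligned (identical embeddings), the loss is near its minimum; for unrelated embeddings it approaches log N, the cost of guessing the positive at random among N candidates.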

(Image credit: Schroff et al. 2015)

Papers

Showing 1151–1200 of 6661 papers

Title | Status | Hype
FELLAS: Enhancing Federated Sequential Recommendation with LLM as External Services |  | 0
Improving Object Detection via Local-global Contrastive Learning |  | 0
Contrastive Learning to Improve Retrieval for Real-world Fact Checking |  | 0
SimO Loss: Anchor-Free Contrastive Loss for Fine-Grained Supervised Contrastive Learning |  | 0
WTCL-Dehaze: Rethinking Real-world Image Dehazing via Wavelet Transform and Contrastive Learning |  | 0
Inner-Probe: Discovering Copyright-related Data Generation in LLM Architecture |  | 0
Multi-Tiered Self-Contrastive Learning for Medical Microwave Radiometry (MWR) Breast Cancer Detection | Code | 0
Enhancement of Dysarthric Speech Reconstruction by Contrastive Learning |  | 0
Improving Arabic Multi-Label Emotion Classification using Stacked Embeddings and Hybrid Loss Function | Code | 0
CUDLE: Learning Under Label Scarcity to Detect Cannabis Use in Uncontrolled Environments |  | 0
Structure-Enhanced Protein Instruction Tuning: Towards General-Purpose Protein Understanding with LLMs |  | 0
Improving Node Representation by Boosting Target-Aware Contrastive Loss |  | 0
CoLLAP: Contrastive Long-form Language-Audio Pretraining with Musical Temporal Structure Augmentation |  | 0
Channel-aware Contrastive Conditional Diffusion for Multivariate Probabilistic Time Series Forecasting | Code | 0
SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations | Code | 0
Contextual Document Embeddings |  | 0
FARM: Functional Group-Aware Representations for Small Molecules |  | 0
Automated Knowledge Concept Annotation and Question Representation Learning for Knowledge Tracing | Code | 0
CktGen: Specification-Conditioned Analog Circuit Generation |  | 0
ScVLM: Enhancing Vision-Language Model for Safety-Critical Event Understanding | Code | 0
CXPMRG-Bench: Pre-training and Benchmarking for X-ray Medical Report Generation on CheXpert Plus Dataset |  | 0
Contrastive Abstraction for Reinforcement Learning |  | 0
Domain Aware Multi-Task Pretraining of 3D Swin Transformer for T1-weighted Brain MRI | Code | 0
NECOMIMI: Neural-Cognitive Multimodal EEG-informed Image Generation with Diffusion Models | Code | 0
RouterDC: Query-Based Router by Dual Contrastive Learning for Assembling Large Language Models | Code | 2
Decoding the Echoes of Vision from fMRI: Memory Disentangling for Past Semantic Information | Code | 0
Enhancing GANs with Contrastive Learning-Based Multistage Progressive Finetuning SNN and RL-Based External Optimization |  | 0
Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats |  | 0
Contrastive ground-level image and remote sensing pre-training improves representation learning for natural world imagery |  | 0
TwinCL: A Twin Graph Contrastive Learning Model for Collaborative Filtering | Code | 0
Embed and Emulate: Contrastive representations for simulation-based inference |  | 0
Reducing Semantic Ambiguity In Domain Adaptive Semantic Segmentation Via Probabilistic Prototypical Pixel Contrast | Code | 0
UniEmoX: Cross-modal Semantic-Guided Large-Scale Pretraining for Universal Scene Emotion Perception | Code | 0
You Only Speak Once to See |  | 0
Understanding the Benefits of SimCLR Pre-Training in Two-Layer Convolutional Neural Networks |  | 0
Harnessing Shared Relations via Multimodal Mixup Contrastive Learning for Multimodal Classification | Code | 0
Robotic-CLIP: Fine-tuning CLIP on Action Data for Robotic Applications |  | 0
LoopSR: Looping Sim-and-Real for Lifelong Policy Adaptation of Legged Robots |  | 0
Reducing and Exploiting Data Augmentation Noise through Meta Reweighting Contrastive Learning for Text Classification |  | 0
CleanerCLIP: Fine-grained Counterfactual Semantic Augmentation for Backdoor Defense in Contrastive Learning |  | 0
Self-supervised Pretraining for Cardiovascular Magnetic Resonance Cine Segmentation | Code | 0
Domain-Independent Automatic Generation of Descriptive Texts for Time-Series Data |  | 0
Beyond Redundancy: Information-aware Unsupervised Multiplex Graph Structure Learning | Code | 1
Towards General Text-guided Image Synthesis for Customized Multimodal Brain MRI Generation | Code | 1
DRIM: Learning Disentangled Representations from Incomplete Multimodal Healthcare Data | Code | 1
Semi-LLIE: Semi-supervised Contrastive Learning with Mamba-based Low-light Image Enhancement | Code | 1
DIAL: Dense Image-text ALignment for Weakly Supervised Semantic Segmentation |  | 0
Enhanced Unsupervised Image-to-Image Translation Using Contrastive Learning and Histogram of Oriented Gradients |  | 0
Patch-Based Contrastive Learning and Memory Consolidation for Online Unsupervised Continual Learning | Code | 0
PseudoNeg-MAE: Self-Supervised Point Cloud Learning using Conditional Pseudo-Negative Embeddings |  | 0
Page 24 of 134

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 |  | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 |  | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 |  | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 |  | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 |  | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 |  | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 |  | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 |  | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 |  | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 |  | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | 10..5sec | 1 |  |  | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 |  | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 |  | Unverified