SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. The learned representations can then serve as features for downstream tasks such as classification and clustering.
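The "pull similar pairs together, push dissimilar pairs apart" objective described above is most often instantiated as the InfoNCE loss. Below is a minimal NumPy sketch (function name, shapes, and the temperature value are illustrative, not from any specific paper): each row of `positives` is the positive example for the matching row of `anchors`, and all other rows in the batch act as negatives.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE contrastive loss over a batch of (anchor, positive) pairs.

    anchors, positives: (N, D) embedding arrays; row i of `positives`
    is the positive for row i of `anchors`, all other rows are in-batch
    negatives. Returns the mean cross-entropy of picking the positive.
    """
    # L2-normalize so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)

    logits = (a @ p.T) / temperature              # (N, N) similarity matrix
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Correct (anchor, positive) pairs lie on the diagonal
    return float(-np.mean(np.diag(log_probs)))
```

The loss is low when each anchor is most similar to its own positive and high when similarities are uninformative, which is exactly the geometry the task description asks for.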

(Image credit: Schroff et al. 2015)

Papers

Showing 1751-1800 of 6661 papers

Title | Status | Hype
Towards Ontology-Enhanced Representation Learning for Large Language Models | Code | 0
Multi-Label Guided Soft Contrastive Learning for Efficient Earth Observation Pretraining | Code | 1
May the Dance be with You: Dance Generation Framework for Non-Humanoids | | 0
PLA4D: Pixel-Level Alignments for Text-to-4D Gaussian Splatting | | 0
Relation Modeling and Distillation for Learning with Noisy Labels | | 0
Video-Language Critic: Transferable Reward Functions for Language-Conditioned Robotics | Code | 0
CLIPLoss and Norm-Based Data Selection Methods for Multimodal Contrastive Learning | Code | 1
Learning Human-Aligned Representations with Contrastive Learning and Generative Similarity | | 0
Multi-stage Retrieve and Re-rank Model for Automatic Medical Coding Recommendation | | 0
Encoding Hierarchical Schema via Concept Flow for Multifaceted Ideology Detection | Code | 0
Supervised Contrastive Learning for Snapshot Spectral Imaging Face Anti-Spoofing | | 0
Contrastive-Adversarial and Diffusion: Exploring pre-training and fine-tuning strategies for sulcal identification | | 0
Aligning in a Compact Space: Contrastive Knowledge Distillation between Heterogeneous Architectures | | 0
MM-Mixing: Multi-Modal Mixing Alignment for 3D Understanding | | 0
LDMol: Text-to-Molecule Diffusion Model with Structurally Informative Latent Space | Code | 1
SleepFM: Multi-modal Representation Learning for Sleep Across Brain Activity, ECG and Respiratory Signals | Code | 2
OV-DQUO: Open-Vocabulary DETR with Denoising Text Query Training and Open-World Unknown Objects Supervision | Code | 1
Enhancing Emotion Recognition in Conversation through Emotional Cross-Modal Fusion and Inter-class Contrastive Learning | | 0
Bridging Mini-Batch and Asymptotic Analysis in Contrastive Learning: From InfoNCE to Kernel-Based Losses | Code | 1
Modeling Dynamic Topics in Chain-Free Fashion by Evolution-Tracking Contrastive Learning and Unassociated Word Exclusion | Code | 1
Relational Self-supervised Distillation with Compact Descriptors for Image Copy Detection | Code | 0
SSLChange: A Self-supervised Change Detection Framework Based on Domain Adaptation | Code | 1
On the Sequence Evaluation based on Stochastic Processes | | 0
A Vlogger-augmented Graph Neural Network Model for Micro-video Recommendation | Code | 0
Finding Shared Decodable Concepts and their Negations in the Brain | | 0
CLIBD: Bridging Vision and Genomics for Biodiversity Monitoring at Scale | Code | 1
Your decision path does matter in pre-training industrial recommenders with multi-source behaviors | | 0
Part123: Part-aware 3D Reconstruction from a Single-view Image | | 0
ContrastAlign: Toward Robust BEV Feature Alignment via Contrastive Learning for Multi-Modal 3D Object Detection | | 0
Unsupervised Generative Feature Transformation via Graph Contrastive Pre-training and Multi-objective Fine-tuning | | 0
Automatically Generating Numerous Context-Driven SFT Data for LLMs across Diverse Granularity | Code | 1
Probabilistic Contrastive Learning with Explicit Concentration on the Hypersphere | | 0
Paths of A Million People: Extracting Life Trajectories from Wikipedia | Code | 0
Improving Multi-lingual Alignment Through Soft Contrastive Learning | Code | 0
A Classifier-Free Incremental Learning Framework for Scalable Medical Image Segmentation | | 0
Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack | | 0
Uncovering LLM-Generated Code: A Zero-Shot Synthetic Code Detector via Code Rewriting | | 0
Negative as Positive: Enhancing Out-of-distribution Generalization for Graph Contrastive Learning | | 0
USD: Unsupervised Soft Contrastive Learning for Fault Detection in Multivariate Time Series | Code | 1
SLIDE: A Framework Integrating Small and Large Language Models for Open-Domain Dialogues Evaluation | Code | 0
Self-Contrastive Weakly Supervised Learning Framework for Prognostic Prediction Using Whole Slide Images | Code | 0
NuwaTS: a Foundation Model Mending Every Incomplete Time Series | | 0
SATSense: Multi-Satellite Collaborative Framework for Spectrum Sensing | | 0
BDetCLIP: Multimodal Prompting Contrastive Test-Time Backdoor Detection | | 0
Rethinking Class-Incremental Learning from a Dynamic Imbalanced Learning Perspective | Code | 0
ProtFAD: Introducing function-aware domains as implicit modality towards protein function prediction | Code | 0
Pre-Trained Vision-Language Models as Partial Annotators | | 0
Improved Canonicalization for Model Agnostic Equivariance | Code | 2
Combining Denoising Autoencoders with Contrastive Learning to fine-tune Transformer Models | Code | 0
Harmony: A Joint Self-Supervised and Weakly-Supervised Framework for Learning General Purpose Visual Representations | Code | 0
Page 36 of 134

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | | 0..5sec | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified