SOTAVerified

Contrastive Learning

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
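This pull-together/push-apart objective is most often implemented as an InfoNCE-style loss over a batch of paired views. Below is a minimal NumPy sketch (not from this page; the function name and temperature value are illustrative assumptions): each row of `z_a` and `z_b` holds two embeddings of the same instance, which form a positive pair, while every other row in the batch serves as a negative.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """InfoNCE-style contrastive loss for a batch of paired embeddings.

    z_a[i] and z_b[i] are embeddings of two views of the same instance,
    so each row pairs positively with its counterpart and negatively
    with every other row in the batch.
    """
    # L2-normalise so the dot product is cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature   # pairwise similarity matrix
    idx = np.arange(len(z_a))            # positives lie on the diagonal
    # Cross-entropy over each row: maximise the diagonal entry relative
    # to the rest, pulling positives together and pushing negatives apart.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[idx, idx].mean()
```

The loss is small when matched pairs are more similar than mismatched ones, and grows toward log(batch size) when the pairing carries no signal.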

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
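For retrieval-style downstream use, the learned embeddings are typically compared by cosine similarity. A minimal sketch of that step, assuming a trained contrastive encoder has already produced the embeddings (the function name is illustrative, not from this page):

```python
import numpy as np

def nearest_neighbours(query, bank, k=5):
    """Return indices of the k embeddings in `bank` most similar to `query`.

    `query` is a single embedding vector and `bank` a matrix of stored
    embeddings, both assumed to come from the same contrastive encoder.
    """
    # Normalise so the dot product equals cosine similarity.
    query = query / np.linalg.norm(query)
    bank = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    sims = bank @ query
    # Sort by descending similarity and keep the top k.
    return np.argsort(-sims)[:k]
```

The same ranking underlies kNN classification and clustering on top of frozen contrastive features.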

(Image credit: Schroff et al. 2015)

Papers

Showing 1–10 of 6,661 papers

Title | Status | Hype
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals | — | 0
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management | — | 0
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation | — | 0
SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts | — | 0
Similarity-Guided Diffusion for Contrastive Sequential Recommendation | — | 0
LLM-Driven Dual-Level Multi-Interest Modeling for Recommendation | — | 0
Latent Space Consistency for Sparse-View CT Reconstruction | — | 0
Self-supervised pretraining of vision transformers for animal behavioral analysis and neural encoding | — | 0
RadiomicsRetrieval: A Customizable Framework for Medical Image Retrieval Using Radiomics Features | Code | 1
NLGCL: Naturally Existing Neighbor Layers Graph Contrastive Learning for Recommendation | Code | 1
Page 1 of 667

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | — | — | — | — | Unverified