SOTAVerified

Contrastive Learning

Contrastive learning is a deep learning technique for unsupervised representation learning. The goal is to learn an embedding of the data such that similar instances lie close together in the representation space while dissimilar instances lie far apart.

It has proven effective across a range of computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these settings, the learned representations can also serve as features for downstream tasks such as classification and clustering.

(Image credit: Schroff et al. 2015)
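The objective above can be sketched with an InfoNCE-style contrastive loss, one common instantiation of the idea (a minimal NumPy toy, not any specific paper's recipe; the batch construction and temperature value here are illustrative assumptions):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss on L2-normalized embeddings.

    anchors, positives: (N, D) arrays. Row i of `positives` is the
    positive example for row i of `anchors`; every other row in the
    batch acts as a negative (illustrative in-batch-negatives setup).
    """
    # L2-normalize so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)

    # (N, N) similarity matrix; the diagonal holds positive-pair scores
    logits = a @ p.T / temperature

    # Cross-entropy with the matching index as the target: pull each
    # anchor toward its positive, push it away from the other rows
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

When the positives are small perturbations of the anchors, the diagonal dominates the similarity matrix and the loss is near zero; pairing anchors with unrelated vectors drives it up toward log N.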

Papers

Showing 2851-2875 of 6661 papers

Title | Status | Hype
Contextuality Helps Representation Learning for Generalized Category Discovery | Code | 0
Balanced Adversarial Training: Balancing Tradeoffs between Fickleness and Obstinacy in NLP Models | Code | 0
Enhancing Homophily-Heterophily Separation: Relation-Aware Learning in Heterogeneous Graphs | Code | 0
PSP: Pre-Training and Structure Prompt Tuning for Graph Neural Networks | Code | 0
Enhancing Graph Contrastive Learning with Reliable and Informative Augmentation for Recommendation | Code | 0
Joint Masked Reconstruction and Contrastive Learning for Mining Interactions Between Proteins | Code | 0
JTCSE: Joint Tensor-Modulus Constraints and Cross-Attention for Unsupervised Contrastive Learning of Sentence Embeddings | Code | 0
Topology Only Pre-Training: Towards Generalised Multi-Domain Graph Models | Code | 0
IRConStyle: Image Restoration Framework Using Contrastive Learning and Style Transfer | Code | 0
HaSa: Hardness and Structure-Aware Contrastive Knowledge Graph Embedding | Code | 0
IPCL: Iterative Pseudo-Supervised Contrastive Learning to Improve Self-Supervised Feature Representation | Code | 0
JCSE: Contrastive Learning of Japanese Sentence Embeddings and Its Applications | Code | 0
Intra-video Positive Pairs in Self-Supervised Learning for Ultrasound | Code | 0
Intra- and Inter-modal Context Interaction Modeling for Conversational Speech Synthesis | Code | 0
Harmony: A Joint Self-Supervised and Weakly-Supervised Framework for Learning General Purpose Visual Representations | Code | 0
Harnessing Joint Rain-/Detail-aware Representations to Eliminate Intricate Rains | Code | 0
Interventional Video Grounding with Dual Contrastive Learning | Code | 0
Into the Unknown: Applying Inductive Spatial-Semantic Location Embeddings for Predicting Individuals' Mobility Beyond Visited Places | Code | 0
Intrinsic and Extrinsic Factor Disentanglement for Recommendation in Various Context Scenarios | Code | 0
Enhancing Contrastive Learning Inspired by the Philosophy of "The Blind Men and the Elephant" | Code | 0
Adaptive Hypergraph Network for Trust Prediction | Code | 0
Enhancing Contrastive Learning-based Electrocardiogram Pretrained Model with Patient Memory Queue | Code | 0
Provable Ordering and Continuity in Vision-Language Pretraining for Generalizable Embodied Agents | Code | 0
JIGMARK: A Black-Box Approach for Enhancing Image Watermarks against Diffusion Model Edits | Code | 0
Interactive Dimensionality Reduction for Comparative Analysis | Code | 0
Page 115 of 267

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | ImageNet Top-1 Accuracy | 73.6 | | Unverified
2 | ResNet50 | ImageNet Top-1 Accuracy | 73 | | Unverified
3 | ResNet50 | ImageNet Top-1 Accuracy | 71.1 | | Unverified
4 | ResNet50 | ImageNet Top-1 Accuracy | 69.3 | | Unverified
5 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 67.6 | | Unverified
6 | ResNet50 (v2) | ImageNet Top-1 Accuracy | 63.8 | | Unverified
7 | ResNet50 | ImageNet Top-1 Accuracy | 63.6 | | Unverified
8 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
9 | ResNet50 | ImageNet Top-1 Accuracy | 61.5 | | Unverified
10 | ResNet50 (4×) | ImageNet Top-1 Accuracy | 61.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | | 0..5sec | 1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 84.77 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | IPCL (ResNet18) | Accuracy (Top-1) | 85.55 | | Unverified