
Representation Learning

Representation Learning is a process in machine learning where algorithms extract meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed for interpretability, reveal hidden features, or be used for transfer learning. They are valuable across many fundamental machine learning tasks like image classification and retrieval.

Deep neural networks can themselves be viewed as representation learning models: they encode their input and project it into a different subspace. The resulting representations are then typically fed to a linear layer, for instance to train a classifier on top of frozen features.
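This "frozen encoder plus linear classifier" setup (often called a linear probe) can be sketched in a few lines. The example below is a toy illustration, not any specific method: a fixed random projection with a ReLU stands in for a pretrained encoder, and a logistic-regression probe is trained on its outputs with plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a frozen encoder: a fixed random projection of raw
# 64-d inputs into a 16-d representation space (a real encoder would be
# a pretrained deep network).
W_enc = rng.normal(size=(64, 16))

def encode(x):
    """Map raw inputs to representations; the encoder stays frozen."""
    return np.maximum(x @ W_enc, 0.0)  # ReLU non-linearity

# Synthetic two-class data whose classes differ in mean.
x = np.concatenate([rng.normal(0.0, 1.0, (100, 64)),
                    rng.normal(0.5, 1.0, (100, 64))])
y = np.array([0] * 100 + [1] * 100)

z = encode(x)  # frozen representations

# Linear probe: logistic regression on z, trained by gradient descent.
w, b = np.zeros(z.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(z @ w + b)))   # predicted probabilities
    grad = p - y                             # dL/dlogits for cross-entropy
    w -= 0.1 * (z.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

acc = ((z @ w + b > 0).astype(int) == y).mean()
print(f"linear-probe accuracy: {acc:.2f}")
```

Only the probe's weights are updated; how well it separates the classes is a common proxy for how much task-relevant information the representation retains.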

Representation learning can be divided into:

  • Supervised representation learning: learning representations on task A using annotated data, then using them to solve task B
  • Unsupervised representation learning: learning representations on a task in an unsupervised way, from label-free data. These representations are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks.
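The unsupervised branch can be sketched with the simplest possible representation learner: PCA fitted on unlabeled data, with the learned projection reused for a downstream classifier trained on only a small labeled set. This is a minimal illustration of the pattern, not a method from the papers below; the data, latent structure, and nearest-centroid classifier are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unlabeled data: points near a 2-d subspace embedded in 20 dimensions.
basis = rng.normal(size=(2, 20))
x_unlabeled = (rng.normal(size=(500, 2)) @ basis
               + 0.1 * rng.normal(size=(500, 20)))

# Representation learning without labels: PCA via SVD of centered data.
mu = x_unlabeled.mean(axis=0)
_, _, vt = np.linalg.svd(x_unlabeled - mu, full_matrices=False)
components = vt[:2]  # top-2 principal directions

def represent(x):
    """Project raw 20-d inputs into the learned 2-d representation."""
    return (x - mu) @ components.T

# Downstream task: a small labeled set whose label depends on the first
# latent coordinate, classified by nearest centroid in representation space.
codes = rng.normal(size=(40, 2))
x_lab = codes @ basis + 0.1 * rng.normal(size=(40, 20))
y_lab = (codes[:, 0] > 0).astype(int)

z = represent(x_lab)
c0, c1 = z[y_lab == 0].mean(axis=0), z[y_lab == 1].mean(axis=0)
pred = (np.linalg.norm(z - c1, axis=1)
        < np.linalg.norm(z - c0, axis=1)).astype(int)
acc = (pred == y_lab).mean()
print(f"downstream accuracy with 40 labels: {acc:.2f}")
```

The 500 unlabeled points do the heavy lifting of finding the informative subspace, so 40 labels suffice for the downstream task.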

More recently, self-supervised learning (SSL) has become one of the main drivers of unsupervised representation learning in fields like computer vision and NLP.
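A core ingredient of many contrastive SSL methods is an InfoNCE-style loss: two augmented views of the same input form a positive pair, and the other items in the batch act as negatives. The sketch below computes such a loss in NumPy on synthetic "views"; the noise level, temperature, and batch size are illustrative choices, not values from any particular method.

```python
import numpy as np

rng = np.random.default_rng(2)

def info_nce(z1, z2, tau=0.1):
    """InfoNCE / NT-Xent-style loss between two batches of paired views.

    z1[i] and z2[i] are representations of two augmentations of the
    same input (a positive pair); all other pairings act as negatives.
    """
    # L2-normalise so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau  # pairwise similarity matrix
    # Cross-entropy with the matching index as each row's target.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy "views": each pair shares an underlying vector plus small noise,
# standing in for two augmentations passed through the same encoder.
base = rng.normal(size=(8, 32))
z1 = base + 0.05 * rng.normal(size=(8, 32))
z2 = base + 0.05 * rng.normal(size=(8, 32))

aligned = info_nce(z1, z2)                     # correct pairings
shuffled = info_nce(z1, z2[rng.permutation(8)])  # broken pairings
print(aligned, shuffled)
```

Minimising this loss pulls the two views of each input together and pushes apart views of different inputs, which is why the aligned pairing scores a much lower loss than the shuffled one.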


( Image credit: Visualizing and Understanding Convolutional Networks )

Papers

Showing 1251–1300 of 10580 papers

Title | Status | Hype
Decoupling Global and Local Representations via Invertible Generative Flows | Code | 1
DeepCF: A Unified Framework of Representation Learning and Matching Function Learning in Recommender System | Code | 1
Deconvolutional Paragraph Representation Learning | Code | 1
Effectiveness of self-supervised pre-training for speech recognition | Code | 1
DECAF: Deep Extreme Classification with Label Features | Code | 1
CLARA: Multilingual Contrastive Learning for Audio Representation Acquisition | Code | 1
A picture of the space of typical learnable tasks | Code | 1
DeCoAR 2.0: Deep Contextualized Acoustic Representations with Vector Quantization | Code | 1
Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness | Code | 1
Efficient Multimodal Transformer with Dual-Level Feature Restoration for Robust Multimodal Sentiment Analysis | Code | 1
DA-TransUNet: Integrating Spatial and Channel Dual Attention with Transformer U-Net for Medical Image Segmentation | Code | 1
Efficient Representation Learning for Healthcare with Cross-Architectural Self-Supervision | Code | 1
Class-Imbalanced Learning on Graphs: A Survey | Code | 1
EH-MAM: Easy-to-Hard Masked Acoustic Modeling for Self-Supervised Speech Representation Learning | Code | 1
Eliminating Sentiment Bias for Aspect-Level Sentiment Classification with Unsupervised Opinion Extraction | Code | 1
Debiased Contrastive Learning | Code | 1
TransGNN: Harnessing the Collaborative Power of Transformers and Graph Neural Networks for Recommender Systems | Code | 1
Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders | Code | 1
Empowering Graph Representation Learning with Test-Time Graph Transformation | Code | 1
EnCodecMAE: Leveraging neural codecs for universal audio representation learning | Code | 1
A critical look at the evaluation of GNNs under heterophily: Are we really making progress? | Code | 1
Endowing Protein Language Models with Structural Knowledge | Code | 1
Data Augmenting Contrastive Learning of Speech Representations in the Time Domain | Code | 1
E(n) Equivariant Graph Neural Networks | Code | 1
Enhancing Dialogue Generation via Dynamic Graph Knowledge Aggregation | Code | 1
Enhancing Graph Representation Learning with Localized Topological Features | Code | 1
ProGCL: Rethinking Hard Negative Mining in Graph Contrastive Learning | Code | 1
CAR: Class-aware Regularizations for Semantic Segmentation | Code | 1
Decoupled Contrastive Learning for Long-Tailed Recognition | Code | 1
CLEFT: Language-Image Contrastive Learning with Efficient Large Language Model and Prompt Fine-Tuning | Code | 1
CARD: Semantic Segmentation with Efficient Class-Aware Regularized Decoder | Code | 1
Clustering-Aware Negative Sampling for Unsupervised Sentence Representation | Code | 1
Generalized Clustering and Multi-Manifold Learning with Geometric Structure Preservation | Code | 1
CyCLIP: Cyclic Contrastive Language-Image Pretraining | Code | 1
CARLA: Self-supervised Contrastive Representation Learning for Time Series Anomaly Detection | Code | 1
CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning | Code | 1
Curriculum-Meta Learning for Order-Robust Continual Relation Extraction | Code | 1
EVA-CLIP: Improved Training Techniques for CLIP at Scale | Code | 1
Evaluating Layers of Representation in Neural Machine Translation on Part-of-Speech and Semantic Tagging Tasks | Code | 1
Evaluating Modules in Graph Contrastive Learning | Code | 1
Evaluating Self-Supervised Learning via Risk Decomposition | Code | 1
Cascaded deep monocular 3D human pose estimation with evolutionary training data | Code | 1
A Proposal of Multi-Layer Perceptron with Graph Gating Unit for Graph Representation Learning and its Application to Surrogate Model for FEM | Code | 1
CASPR: Customer Activity Sequence-based Prediction and Representation | Code | 1
CLIP-Lite: Information Efficient Visual Representation Learning with Language Supervision | Code | 1
CAST: Character labeling in Animation using Self-supervision by Tracking | Code | 1
Catastrophic Forgetting in Deep Graph Networks: an Introductory Benchmark for Graph Classification | Code | 1
Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations | Code | 1
DACAD: Domain Adaptation Contrastive Learning for Anomaly Detection in Multivariate Time Series | Code | 1
Curriculum DeepSDF | Code | 1
Page 26 of 212

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SciNCL | Avg. | 81.8 | | Unverified
2 | SPECTER | Avg. | 80 | | Unverified
3 | Citeomatic | Avg. | 76 | | Unverified
4 | Sci-DeCLUTR | Avg. | 66.6 | | Unverified
5 | SciBERT | Avg. | 59.6 | | Unverified
6 | BioBERT | Avg. | 58.8 | | Unverified
7 | CiteBERT | Avg. | 58.8 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | top_model_weights_with_3d_2 | 1:1 Accuracy | 0.75 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Resnet 18 | Accuracy (%) | 97.05 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Morphological Network | Accuracy | 97.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Max Margin Contrastive | Silhouette Score | 0.56 | | Unverified