
Representation Learning

Representation learning is the process by which machine learning algorithms extract meaningful patterns from raw data and turn them into representations that are easier to understand and process. These representations can be designed for interpretability, used to reveal hidden features, or reused for transfer learning, and they are valuable across many fundamental machine learning tasks such as image classification and retrieval.

Deep neural networks can themselves be viewed as representation learning models: they encode the input and project it into a different subspace. These representations are then typically passed to a simple head, for instance a linear classifier trained for the downstream task, as sketched below.
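As an illustration, the sketch below freezes a pretrained ResNet-18 from torchvision as the representation model and trains only a linear classifier on its 512-dimensional features (a "linear probe"). The encoder choice, batch shapes, and number of classes are placeholders rather than a prescribed setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# A minimal linear-probe sketch: a frozen pretrained encoder produces
# representations, and only a linear classifier is trained on top of them.
encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = nn.Identity()           # drop the original classification head
encoder.eval()
for p in encoder.parameters():
    p.requires_grad = False          # freeze the representation model

num_classes = 10                     # placeholder for the downstream task
probe = nn.Linear(512, num_classes)  # ResNet-18 representations are 512-d
optimizer = torch.optim.SGD(probe.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch (replace with a real DataLoader).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
with torch.no_grad():
    features = encoder(images)       # (8, 512) representations
optimizer.zero_grad()
loss = criterion(probe(features), labels)
loss.backward()
optimizer.step()
```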

Representation learning can be divided into:

  • Supervised representation learning: learning representations on task A using annotated data, then reusing them to solve task B.
  • Unsupervised representation learning: learning representations from label-free data. These representations are then used to address downstream tasks, reducing the need for annotated data when learning new tasks (see the sketch after this list). Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks.
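
For the unsupervised route, a minimal sketch (assuming the Hugging Face transformers library and bert-base-uncased as a stand-in for any pre-trained language model) extracts frozen sentence representations that a small labeled head could then consume:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# BERT was pre-trained without task labels; its representations are reused
# here as fixed features for a downstream classifier.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
bert.eval()

sentences = ["representation learning reduces labeling cost",
             "downstream tasks reuse pre-trained features"]
batch = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = bert(**batch)

# Mean-pool token embeddings into one 768-d representation per sentence.
mask = batch["attention_mask"].unsqueeze(-1)
sentence_repr = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)

# A small labeled head (e.g. nn.Linear(768, n_classes)) would be trained on
# sentence_repr, needing far fewer annotations than training from scratch.
```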

More recently, self-supervised learning (SSL) has become one of the main drivers of unsupervised representation learning in fields like computer vision and NLP.
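A common SSL recipe is contrastive learning. The function below is a compact, generic InfoNCE/NT-Xent loss in the SimCLR style, written from the standard formulation rather than any specific paper's code; tensor shapes and the temperature value are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """SimCLR-style contrastive loss: two augmented views of the same input
    (z1[i], z2[i]) are pulled together, all other pairs are pushed apart."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)               # (2N, d) stacked views
    sim = z @ z.T / temperature                  # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float("-inf"))   # ignore self-similarity
    # The positive for index i is its other augmented view: i + n (or i - n).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Dummy usage: 128-d embeddings of two augmented views of the same 4 images.
loss = info_nce_loss(torch.randn(4, 128), torch.randn(4, 128))
```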


(Image credit: Visualizing and Understanding Convolutional Networks)

Papers

Showing 4051–4100 of 10580 papers

Title | Status | Hype
Jointly Visual- and Semantic-Aware Graph Memory Networks for Temporal Sentence Localization in Videos | — | 0
Iterative Circuit Repair Against Formal Specifications | Code | 0
Multi-Task Self-Supervised Time-Series Representation Learning | — | 0
AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context Processing for Representation Learning of Giga-pixel Images | Code | 1
Asymmetric Learning for Graph Neural Network based Link Prediction | — | 0
Can representation learning for multimodal image registration be improved by supervision of intermediate layers? | — | 0
Mosaic Representation Learning for Self-supervised Visual Pre-training | Code | 1
Representation Disentaglement via Regularization by Causal Identification | — | 0
Mask3D: Pre-training 2D Vision Transformers by Learning Masked 3D Priors | — | 0
RoPAWS: Robust Semi-supervised Representation Learning from Uncurated Data | Code | 2
Weighted Sampling for Masked Language Modeling | — | 0
BrainBERT: Self-supervised representation learning for intracranial recordings | Code | 1
Structured Pruning of Self-Supervised Pre-trained Models for Speech Recognition and Understanding | Code | 1
A Dataset for Learning Graph Representations to Predict Customer Returns in Fashion Retail | — | 0
Semantic-aware Node Synthesis for Imbalanced Heterogeneous Information Networks | — | 0
A low latency attention module for streaming self-supervised speech representation learning | Code | 0
Knowledge-enhanced Visual-Language Pre-training on Chest Radiology Images | Code | 1
Internet Explorer: Targeted Representation Learning on the Open Web | Code | 1
Joint-MAE: 2D-3D Joint Masked Autoencoders for 3D Point Cloud Pre-training | — | 0
DeepSeq: Deep Sequential Circuit Learning | — | 0
LODE: Locally Conditioned Eikonal Implicit Scene Completion from Sparse LiDAR | Code | 1
Improving Representational Continuity via Continued Pretraining | Code | 0
Efficient fair PCA for fair representation learning | Code | 0
MCoCo: Multi-level Consistency Collaborative Multi-view Clustering | — | 0
Generative Models for 3D Point Clouds | Code | 0
Partial Label Learning for Emotion Recognition from EEG | Code | 1
Knowledge-infused Contrastive Learning for Urban Imagery-based Socioeconomic Prediction | Code | 1
T-Phenotype: Discovering Phenotypes of Predictive Temporal Patterns in Disease Progression | Code | 0
Retrieved Sequence Augmentation for Protein Representation Learning | Code | 1
Language-Driven Representation Learning for Robotics | Code | 2
Amortised Invariance Learning for Contrastive Self-Supervision | Code | 0
Generalization Analysis for Contrastive Representation Learning | — | 0
Catch You and I Can: Revealing Source Voiceprint Against Voice Conversion | — | 0
FTM: A Frame-level Timeline Modeling Method for Temporal Graph Representation Learning | Code | 1
Learning Visual Representations via Language-Guided Sampling | Code | 1
A Constraints Fusion-induced Symmetric Nonnegative Matrix Factorization Approach for Community Detection | — | 0
Improving Adaptive Conformal Prediction Using Self-Supervised Learning | Code | 1
Contrastive Representation Learning for Acoustic Parameter Estimation | — | 0
GTRL: An Entity Group-Aware Temporal Knowledge Graph Representation Learning Method | Code | 0
Drop Edges and Adapt: a Fairness Enforcing Fine-tuning for Graph Neural Networks | — | 0
Learning Dynamic Graph Embeddings with Neural Controlled Differential Equations | — | 0
A critical look at the evaluation of GNNs under heterophily: Are we really making progress? | Code | 1
HINormer: Representation Learning On Heterogeneous Information Networks with Graph Transformer | Code | 1
Saliency Guided Contrastive Learning on Scene Images | — | 0
Steerable Equivariant Representation Learning | — | 0
Edgeformers: Graph-Empowered Transformers for Representation Learning on Textual-Edge Networks | Code | 1
A General-Purpose Transferable Predictor for Neural Architecture Search | — | 0
Link Prediction on Latent Heterogeneous Graphs | Code | 1
Scalable Infomin Learning | Code | 1
Learning Language Representations with Logical Inductive Bias | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SciNCL | Avg. | 81.8 | — | Unverified
2 | SPECTER | Avg. | 80 | — | Unverified
3 | Citeomatic | Avg. | 76 | — | Unverified
4 | Sci-DeCLUTR | Avg. | 66.6 | — | Unverified
5 | SciBERT | Avg. | 59.6 | — | Unverified
6 | BioBERT | Avg. | 58.8 | — | Unverified
7 | CiteBERT | Avg. | 58.8 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | top_model_weights_with_3d_2 | 1:1 Accuracy | 0.75 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Resnet 18 | Accuracy (%) | 97.05 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Morphological Network | Accuracy | 97.3 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Max Margin Contrastive | Silhouette Score | 0.56 | — | Unverified