SOTAVerified

Representation Learning

Representation Learning is a process in machine learning where algorithms extract meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed for interpretability, reveal hidden features, or be used for transfer learning. They are valuable across many fundamental machine learning tasks like image classification and retrieval.

Deep neural networks can be viewed as representation learning models: they encode their input by projecting it into a different subspace. These representations are then typically passed to a linear classifier, for instance to train an image classifier on top of frozen features.
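The "frozen encoder plus linear classifier" recipe above can be sketched in a few lines. The encoder here is a stand-in random projection (not a real pretrained network), and all names and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    # Stand-in for a frozen deep network: project raw inputs into a new subspace.
    return np.maximum(x @ W, 0.0)  # ReLU(x W)

# Toy data: two Gaussian blobs in a 20-D raw input space.
X = np.vstack([rng.normal(-1, 1, (100, 20)), rng.normal(1, 1, (100, 20))])
y = np.concatenate([np.zeros(100), np.ones(100)])

W = rng.normal(size=(20, 8))            # frozen "representation" weights
Z = encoder(X, W)                       # stand-in for learned representations
Z = (Z - Z.mean(0)) / (Z.std(0) + 1e-8) # standardize features for stable training

# Train only a linear (logistic) classifier on top of the frozen features.
w, b = np.zeros(Z.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
    g = p - y                           # gradient of the log loss
    w -= 0.1 * Z.T @ g / len(y)
    b -= 0.1 * g.mean()

acc = ((1.0 / (1.0 + np.exp(-(Z @ w + b))) > 0.5) == y).mean()
print(f"linear-probe accuracy: {acc:.2f}")
```

Only `w` and `b` are trained; the encoder stays fixed, which is the essence of the linear-probe evaluation mentioned above.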

Representation learning can be divided into:

  • Supervised representation learning: representations are learned on task A using annotated data and then reused to solve task B
  • Unsupervised representation learning: representations are learned from label-free data. They are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks.

More recently, self-supervised learning (SSL) has become one of the main drivers of unsupervised representation learning in fields like computer vision and NLP.
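A common objective behind much of this self-supervised work is contrastive: embeddings of two "views" of the same sample should be similar, while embeddings of different samples should not. The toy InfoNCE-style loss below illustrates the idea; all names, shapes, and the temperature value are illustrative, not a specific paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)

def info_nce(z1, z2, tau=0.1):
    # z1[i] and z2[i] are embeddings of two views of the same sample i.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau  # cosine similarities scaled by a temperature
    # Cross-entropy with the matching view (the diagonal) as the positive.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_probs).mean()

base = rng.normal(size=(8, 16))
# Two slightly perturbed "augmented views" of the same embeddings.
aligned = info_nce(base + 0.01 * rng.normal(size=(8, 16)),
                   base + 0.01 * rng.normal(size=(8, 16)))
# Embeddings of unrelated samples.
mismatched = info_nce(base, rng.normal(size=(8, 16)))
print(aligned, mismatched)  # aligned views give a much lower loss
```

Minimizing this loss over an encoder's outputs pushes it to produce representations that identify which inputs came from the same underlying sample, without any labels.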


( Image credit: Visualizing and Understanding Convolutional Networks )

Papers

Showing 7801–7825 of 10580 papers

Title | Status | Hype
On the Importance of Distraction-Robust Representations for Robot Learning | - | 0
Non-Markovian Predictive Coding For Planning In Latent Space | - | 0
Exploring representation learning for flexible few-shot tasks | - | 0
Collaborative Filtering with Smooth Reconstruction of the Preference Function | - | 0
AC-VAE: Learning Semantic Representation with VAE for Adaptive Clustering | - | 0
Leveraging affinity cycle consistency to isolate factors of variation in learned representations | - | 0
Learning disentangled representations with the Wasserstein Autoencoder | - | 0
Invariant Causal Representation Learning | - | 0
xERTE: Explainable Reasoning on Temporal Knowledge Graphs for Forecasting Future Links | Code | 1
Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans | Code | 2
Language-Mediated, Object-Centric Representation Learning | - | 0
Deep Graph Generators: A Survey | - | 0
Binary Graph Neural Networks | Code | 1
Enhancing Sindhi Word Segmentation using Subword Representation Learning and Position-aware Self-attention | - | 0
AU-Expression Knowledge Constrained Representation Learning for Facial Expression Recognition | Code | 1
AttrE2vec: Unsupervised Attributed Edge Representation Learning | - | 0
Hybrid Micro/Macro Level Convolution for Heterogeneous Graph Learning | Code | 0
Universal Sentence Representation Learning with Conditional Masked Language Model | - | 0
BURT: BERT-inspired Universal Representation from Learning Meaningful Segment | - | 0
Signed Graph Diffusion Network | - | 0
Generalized Categorisation of Digital Pathology Whole Image Slides using Unsupervised Learning | Code | 0
On self-supervised multi-modal representation learning: An application to Alzheimer's disease | Code | 0
Self-supervised Pre-training with Hard Examples Improves Visual Representations | - | 0
Evolution Is All You Need: Phylogenetic Augmentation for Contrastive Learning | - | 0
Self-Supervised Multimodal Domino: in Search of Biomarkers for Alzheimer's Disease | Code | 0
Page 313 of 424

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SciNCL | Avg. | 81.8 | - | Unverified
2 | SPECTER | Avg. | 80 | - | Unverified
3 | Citeomatic | Avg. | 76 | - | Unverified
4 | Sci-DeCLUTR | Avg. | 66.6 | - | Unverified
5 | SciBERT | Avg. | 59.6 | - | Unverified
6 | BioBERT | Avg. | 58.8 | - | Unverified
7 | CiteBERT | Avg. | 58.8 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | top_model_weights_with_3d_2 | 1:1 Accuracy | 0.75 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Resnet 18 | Accuracy (%) | 97.05 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Morphological Network | Accuracy | 97.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Max Margin Contrastive | Silhouette Score | 0.56 | - | Unverified