SOTAVerified

Representation Learning

Representation Learning is a process in machine learning where algorithms extract meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed for interpretability, reveal hidden features, or be used for transfer learning. They are valuable across many fundamental machine learning tasks like image classification and retrieval.

Deep neural networks can be viewed as representation learning models: they encode their input into intermediate representations, typically by projecting it into a different subspace. These representations are then usually fed to a simple predictor, for instance a linear classifier.
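As a minimal toy sketch of this idea (not taken from this page; the encoder weights and data are invented for illustration): a frozen "encoder" projects raw 2-D inputs into a feature space, and a simple linear classifier is trained only on those fixed features.

```python
# Toy sketch: a frozen "encoder" projects raw 2-D inputs into a 3-D
# feature space; a linear classifier (perceptron) is then trained on top
# of those fixed representations. Weights and data are hypothetical.

def encoder(x):
    """Stand-in for a pretrained network: fixed (frozen) ReLU projection."""
    w = [(0.5, -1.0), (1.0, 0.5), (-0.5, 0.5)]  # hypothetical frozen weights
    return [max(0.0, a * x[0] + b * x[1]) for a, b in w]

def train_linear_probe(data, labels, epochs=20, lr=0.1):
    """Perceptron trained on encoder features; the encoder is never updated."""
    feats = [encoder(x) for x in data]
    w, b = [0.0] * len(feats[0]), 0.0
    for _ in range(epochs):
        for f, y in zip(feats, labels):  # y in {-1, +1}
            if y * (sum(wi * fi for wi, fi in zip(w, f)) + b) <= 0:
                w = [wi + lr * y * fi for wi, fi in zip(w, f)]
                b += lr * y
    return w, b

data = [(1.0, 0.0), (2.0, 1.0), (-1.0, 0.0), (-2.0, -1.0)]  # toy inputs
labels = [1, 1, -1, -1]
w, b = train_linear_probe(data, labels)
preds = [1 if sum(wi * fi for wi, fi in zip(w, encoder(x))) + b > 0 else -1
         for x in data]
print(preds)
```

In practice the encoder would be a pretrained deep network and the probe a logistic or softmax layer, but the division of labor is the same: the representation does the heavy lifting, the classifier on top stays linear.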

Representation learning can be divided into:

  • Supervised representation learning: representations are learned on a task A using annotated data, then reused to solve a task B
  • Unsupervised representation learning: representations are learned from label-free data in an unsupervised way. They are then used for downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks.

More recently, self-supervised learning (SSL) has become one of the main drivers of unsupervised representation learning in fields like computer vision and NLP.
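To make the unsupervised case concrete, here is a tiny sketch (with an invented toy corpus) of the distributional idea behind word representations: useful features can be extracted from raw, unlabeled text alone, simply from which words co-occur.

```python
# Toy sketch (hypothetical corpus): unsupervised word representations
# built from co-occurrence counts in unlabeled text -- the distributional
# idea underlying learned embeddings, with no annotations involved.
from collections import defaultdict
from math import sqrt

corpus = [
    "cats chase mice",
    "dogs chase cats",
    "cats eat fish",
    "dogs eat meat",
    "stocks rise today",
    "markets rise today",
    "stocks fall today",
]

# Represent each word by the counts of words appearing in its sentences.
vecs = defaultdict(lambda: defaultdict(int))
for sent in corpus:
    words = sent.split()
    for w in words:
        for c in words:
            if c != w:
                vecs[w][c] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    return dot / (sqrt(sum(x * x for x in u.values()))
                  * sqrt(sum(x * x for x in v.values())))

# Animal words end up closer to each other than to finance words,
# even though no labels were ever provided.
print(round(cosine(vecs["cats"], vecs["dogs"]), 3))
print(round(cosine(vecs["cats"], vecs["stocks"]), 3))
```

Modern SSL methods replace raw counts with learned neural encoders and pretext objectives (masked-token prediction, contrastive views), but the principle is the same: structure in unlabeled data supervises the representation.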


(Image credit: Visualizing and Understanding Convolutional Networks)

Papers

Showing 10101–10150 of 10580 papers

Title | Status | Hype
Neural Discrete Representation Learning | Code | 2
Context-Aware Smoothing for Neural Machine Translation | — | 0
On Modeling Sense Relatedness in Multi-prototype Word Embedding | — | 0
Implicit Syntactic Features for Target-dependent Sentiment Analysis | — | 0
Information Bottleneck Inspired Method For Chat Text Segmentation | — | 0
Fixing a Broken ELBO | Code | 0
Common Representation Learning Using Step-based Correlation Multi-Modal CNN | Code | 0
Eigenoption Discovery through the Deep Successor Representation | Code | 1
Speeding up Context-based Sentence Representation Learning with Non-autoregressive Convolutional Decoding | — | 0
Inductive Representation Learning in Large Attributed Graphs | — | 0
Improving One-Shot Learning through Fusing Side Information | — | 0
Learning Deep Context-aware Features over Body and Latent Parts for Person Re-identification | — | 0
Graph Embedding with Rich Information through Heterogeneous Network | — | 0
Information-Theoretic Representation Learning for Positive-Unlabeled Classification | — | 0
CM-GANs: Cross-modal Generative Adversarial Networks for Common Representation Learning | — | 0
Residual Connections Encourage Iterative Inference | — | 0
DisSent: Sentence Representation Learning from Explicit Discourse Relations | Code | 0
Function space analysis of deep learning representation layers | — | 0
Network Embedding as Matrix Factorization: Unifying DeepWalk, LINE, PTE, and node2vec | Code | 0
Reconstruction of Hidden Representation for Robust Feature Extraction | — | 0
RUM: network Representation learning throUgh Multi-level structural information preservation | — | 0
Linear-Time Sequence Classification using Restricted Boltzmann Machines | — | 0
How Much Chemistry Does a Deep Neural Network Need to Know to Make Accurate Predictions? | Code | 0
Content-Adaptive Sketch Portrait Generation by Decompositional Representation Learning | — | 0
Attentive Convolution: Equipping CNNs with RNN-style Attention Mechanisms | Code | 0
Deep Determinantal Point Process for Large-Scale Multi-Label Classification | — | 0
Directionally Convolutional Networks for 3D Shape Segmentation | — | 0
Deep Spatial-Semantic Attention for Fine-Grained Sketch-Based Image Retrieval | — | 0
The Consciousness Prior | Code | 0
MR Acquisition-Invariant Representation Learning | Code | 0
Variational Memory Addressing in Generative Models | Code | 0
EMR-based medical knowledge representation and inference via Markov random fields and distributed representation learning | — | 0
An Attention-based Collaboration Framework for Multi-View Network Representation Learning | Code | 0
Learning to update Auto-associative Memory in Recurrent Neural Networks for Improving Sequence Memorization | — | 0
Limitations of Cross-Lingual Learning from Image Search | — | 0
Representation Learning on Graphs: Methods and Applications | — | 0
Unsupervised state representation learning with robotic priors: a robustness benchmark | — | 0
A Framework for Generalizing Graph-based Representation Learning Methods | — | 0
KBLRN: End-to-End Learning of Knowledge Base Representations with Latent, Relational, and Numerical Features | Code | 0
GLAD: Global-Local-Alignment Descriptor for Pedestrian Retrieval | — | 0
A Truly ℓ0 Approach to Dictionary Learning (original title in French: "Une véritable approche ℓ0 pour l'apprentissage de dictionnaire") | — | 0
NiftyNet: a deep-learning platform for medical imaging | Code | 1
Answering Visual-Relational Queries in Web-Extracted Knowledge Graphs | Code | 0
Neural Networks Regularization Through Class-wise Invariant Representation Learning | Code | 0
Cross-Lingual Word Representations: Induction and Evaluation | — | 0
Multi-task Attention-based Neural Networks for Implicit Discourse Relationship Representation and Identification | — | 0
End-to-End Neural Relation Extraction with Global Optimization | — | 0
Sentiment Lexicon Construction with Representation Learning Based on Hierarchical Sentiment Supervision | Code | 0
Fine-grained Visual-textual Representation Learning | Code | 0
Representation Learning by Learning to Count | Code | 0
Page 203 of 212

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SciNCL | Avg. | 81.8 | — | Unverified
2 | SPECTER | Avg. | 80 | — | Unverified
3 | Citeomatic | Avg. | 76 | — | Unverified
4 | Sci-DeCLUTR | Avg. | 66.6 | — | Unverified
5 | SciBERT | Avg. | 59.6 | — | Unverified
6 | BioBERT | Avg. | 58.8 | — | Unverified
7 | CiteBERT | Avg. | 58.8 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | top_model_weights_with_3d_2 | 1:1 Accuracy | 0.75 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Resnet 18 | Accuracy (%) | 97.05 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Morphological Network | Accuracy | 97.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Max Margin Contrastive | Silhouette Score | 0.56 | — | Unverified