SOTAVerified

Representation Learning

Representation Learning is a process in machine learning where algorithms extract meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed for interpretability, reveal hidden features, or be used for transfer learning. They are valuable across many fundamental machine learning tasks like image classification and retrieval.

Deep neural networks can be considered representation learners: they encode their input and project it into a different subspace. The resulting representations are then typically passed to a linear layer, for instance to train a linear classifier on top of them.
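
A common way to use such frozen representations is a "linear probe": only a linear classifier is trained on top of the encoder's output. The sketch below illustrates the idea with a fixed random projection standing in for a pretrained network; all data, dimensions, and hyperparameters are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen "encoder": a fixed random nonlinear projection that
# stands in for a pretrained deep network mapping raw inputs into a
# representation subspace (purely illustrative).
W_enc = rng.normal(size=(64, 16)) / np.sqrt(64)

def encode(x):
    return np.tanh(x @ W_enc)

# Toy labelled data: two Gaussian classes in the raw 64-d input space.
x = np.concatenate([rng.normal(-1.0, 1.0, size=(100, 64)),
                    rng.normal(+1.0, 1.0, size=(100, 64))])
y = np.concatenate([np.zeros(100), np.ones(100)])

z = encode(x)  # frozen representations; only the probe below is trained

# Linear probe: logistic regression trained on the representations alone.
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(z @ w + b)))  # sigmoid predictions
    w -= 0.1 * z.T @ (p - y) / len(y)       # gradient step on weights
    b -= 0.1 * (p - y).mean()               # gradient step on bias

accuracy = ((z @ w + b > 0) == y.astype(bool)).mean()
```

If the frozen representations separate the classes well, the probe reaches high accuracy despite being purely linear — which is exactly what makes linear probing a useful diagnostic for representation quality.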

Representation learning can be divided into:

  • Supervised representation learning: learning representations on task A using annotated data, then using them to solve task B
  • Unsupervised representation learning: learning representations on a task in an unsupervised way (label-free data). These representations are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks.
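
A classic example of the unsupervised case is an autoencoder, which learns a compact code purely from reconstruction error on unlabelled data. The sketch below uses a minimal tied-weight linear autoencoder as a toy illustration; the data, dimensions, and learning rate are arbitrary choices, not from any particular method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unlabelled data that lies close to a 2-d subspace of a 20-d space.
basis = rng.normal(size=(2, 20))
x = rng.normal(size=(500, 2)) @ basis + 0.01 * rng.normal(size=(500, 20))

# Minimal linear autoencoder with tied weights: the 2-d code z is the
# learned representation, trained with no labels at all.
W = 0.1 * rng.normal(size=(20, 2))
lr = 0.005
for _ in range(500):
    z = x @ W              # encode: 20-d input -> 2-d representation
    x_hat = z @ W.T        # decode: reconstruct the input from the code
    err = x_hat - x
    # Gradient of the mean squared reconstruction error w.r.t. the
    # tied weight matrix (the constant factor is absorbed into lr).
    grad = (x.T @ err @ W + err.T @ x @ W) / len(x)
    W -= lr * grad

mse = ((x @ W @ W.T - x) ** 2).mean()
baseline = (x ** 2).mean()  # error of reconstructing with zeros
```

After training, the reconstruction error drops far below the trivial baseline because the code has captured the 2-d structure of the data — a tied-weight linear autoencoder is known to recover the principal subspace.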

More recently, self-supervised learning (SSL) has become one of the main drivers of unsupervised representation learning in fields like computer vision and NLP.
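
One popular family of SSL objectives is contrastive: representations of two augmented views of the same input are pulled together while representations of different inputs are pushed apart. The toy NumPy InfoNCE-style loss below illustrates the idea (a SimCLR-flavoured sketch; the batch size, dimensions, and temperature are illustrative assumptions, not any paper's exact implementation).

```python
import numpy as np

rng = np.random.default_rng(2)

def info_nce(z1, z2, tau=0.1):
    """InfoNCE-style loss on two batches of embeddings, where z1[i] and
    z2[i] come from two views of the same input; all other pairs in the
    batch act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau  # temperature-scaled cosine similarities
    # Row i's positive is column i; cross-entropy against that target.
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))

# Aligned views (positives nearly identical) should score a much lower
# loss than random, unrelated embeddings.
anchor = rng.normal(size=(32, 8))
aligned = anchor + 0.05 * rng.normal(size=(32, 8))
random_views = rng.normal(size=(32, 8))

loss_aligned = info_nce(anchor, aligned)
loss_random = info_nce(anchor, random_views)
```

An encoder trained to minimise this loss must produce embeddings that are invariant to the augmentations while remaining discriminative across inputs, which is what makes them useful representations downstream.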


(Image credit: Visualizing and Understanding Convolutional Networks)

Papers

Showing 9601–9650 of 10580 papers

| Title | Status | Hype |
|---|---|---|
| Hierarchically Clustered Representation Learning | — | 0 |
| On Learning Invariant Representation for Domain Adaptation | Code | 0 |
| Revisiting Self-Supervised Visual Representation Learning | Code | 0 |
| BioBERT: a pre-trained biomedical language representation model for biomedical text mining | Code | 1 |
| Unsupervised speech representation learning using WaveNet autoencoders | Code | 1 |
| Decoupling feature extraction from policy learning: assessing benefits of state representation learning in goal based robotics | Code | 0 |
| Hypergraph Convolution and Hypergraph Attention | — | 0 |
| Computer Vision and Metrics Learning for Hypothesis Testing: An Application of Q-Q Plot for Normality Test | — | 0 |
| Loss Landscapes of Regularized Linear Autoencoders | Code | 0 |
| Learning sound representations using trainable COPE feature extractors | — | 0 |
| DLocRL: A Deep Learning Pipeline for Fine-Grained Location Recognition and Linking in Tweets | — | 0 |
| CommunityGAN: Community Detection with Generative Adversarial Nets | Code | 0 |
| Deep Representation Learning Characterized by Inter-class Separation for Image Clustering | — | 0 |
| Learning Vertex Representations for Bipartite Networks | Code | 0 |
| Representation Learning on Graphs: A Reinforcement Learning Application | — | 0 |
| DeepCF: A Unified Framework of Representation Learning and Matching Function Learning in Recommender System | Code | 1 |
| AET vs. AED: Unsupervised Representation Learning by Auto-Encoding Transformations rather than Data | Code | 0 |
| Predicting Diffusion Reach Probabilities via Representation Learning on Social Networks | Code | 0 |
| Large-scale Collaborative Filtering with Product Embeddings | — | 0 |
| Preventing Posterior Collapse with delta-VAEs | — | 0 |
| Deep Semantic Multimodal Hashing Network for Scalable Image-Text and Video-Text Retrievals | — | 0 |
| Transfer Representation Learning with TSK Fuzzy System | — | 0 |
| Tencent ML-Images: A Large-Scale Multi-Label Image Database for Visual Representation Learning | Code | 2 |
| Deep Network Embedding for Graph Representation Learning in Signed Networks | Code | 0 |
| MAE: Mutual Posterior-Divergence Regularization for Variational AutoEncoders | — | 0 |
| Bilinear Supervised Hashing Based on 2D Image Features | — | 0 |
| Efficient Representation Learning Using Random Walks for Dynamic Graphs | Code | 0 |
| Volumetric Convolution: Automatic Representation Learning in Unit Ball | — | 0 |
| Attribute-Aware Attention Model for Fine-grained Representation Learning | Code | 0 |
| Morphological Network: How Far Can We Go with Morphological Neurons? | — | 0 |
| Learning Spatial Common Sense with Geometry-Aware Recurrent Networks | — | 0 |
| Cross-language Citation Recommendation via Hierarchical Representation Learning on Heterogeneous Graph | Code | 0 |
| State representation learning with recurrent capsule networks | — | 0 |
| Knowledge Representation Learning: A Quantitative Review | Code | 2 |
| Improving Generalization of Deep Neural Networks by Leveraging Margin Distribution | Code | 0 |
| Topological Constraints on Homeomorphic Auto-Encoding | — | 0 |
| Uncertainty Autoencoders: Learning Compressed Representations via Variational Information Maximization | — | 0 |
| TransNFCM: Translation-Based Neural Fashion Compatibility Modeling | — | 0 |
| Deep Representation Learning for Clustering of Health Tweets | — | 0 |
| Learning finite-dimensional coding schemes with nonlinear reconstruction maps | — | 0 |
| Enhancing Discrete Choice Models with Representation Learning | Code | 1 |
| Dynamic Graph Representation Learning via Self-Attention Networks | Code | 0 |
| COSINE: Compressive Network Embedding on Large-scale Information Networks | — | 0 |
| Learning Representations from Dendrograms | — | 0 |
| RNNs Implicitly Implement Tensor Product Representations | — | 0 |
| CNN based Multi-Instance Multi-Task Learning for Syndrome Differentiation of Diabetic Patients | — | 0 |
| Clustering-Oriented Representation Learning with Attractive-Repulsive Loss | Code | 0 |
| Machine Learning for Molecular Dynamics on Long Timescales | — | 0 |
| Unsupervised Anomaly Detection in Energy Time Series Data Using Variational Recurrent Autoencoders with Attention | — | 0 |
| Variational Autoencoders Pursue PCA Directions (by Accident) | — | 0 |
Page 193 of 212

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | SciNCL | Avg. | 81.8 | — | Unverified |
| 2 | SPECTER | Avg. | 80 | — | Unverified |
| 3 | Citeomatic | Avg. | 76 | — | Unverified |
| 4 | Sci-DeCLUTR | Avg. | 66.6 | — | Unverified |
| 5 | SciBERT | Avg. | 59.6 | — | Unverified |
| 6 | BioBERT | Avg. | 58.8 | — | Unverified |
| 7 | CiteBERT | Avg. | 58.8 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | top_model_weights_with_3d_2 | 1:1 Accuracy | 0.75 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Resnet 18 | Accuracy (%) | 97.05 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Morphological Network | Accuracy | 97.3 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Max Margin Contrastive | Silhouette Score | 0.56 | — | Unverified |