SOTAVerified

Representation Learning

Representation Learning is a process in machine learning where algorithms extract meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed for interpretability, reveal hidden features, or be used for transfer learning. They are valuable across many fundamental machine learning tasks like image classification and retrieval.

Deep neural networks can be considered representation learning models: they encode the input and project it into a different subspace. The resulting representations are then typically passed to a simple model, such as a linear classifier, to solve a downstream task.
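The frozen-encoder-plus-linear-head setup described above is often called linear probing. A minimal sketch in plain Python, assuming a toy random-projection "encoder" as a stand-in for a pretrained network (all names and constants here are illustrative, not any real model):

```python
import math
import random

random.seed(0)

# Toy sketch of linear probing: a frozen "encoder" maps raw inputs to a
# representation, and only a linear classifier is trained on top of it.
# The encoder (a fixed random projection + ReLU) is an illustrative
# stand-in for a pretrained deep network.

# Two 2-D Gaussian blobs as a toy binary classification dataset.
data = [([random.gauss(-1, 0.4), random.gauss(-1, 0.4)], 0) for _ in range(80)]
data += [([random.gauss(1, 0.4), random.gauss(1, 0.4)], 1) for _ in range(80)]

# Frozen encoder weights: 2 -> 4 random projection, never updated.
W_enc = [[random.gauss(0, 1) for _ in range(2)] for _ in range(4)]

def encode(x):
    """Map a raw input to its (frozen) representation."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W_enc]

feats = [(encode(x), y) for x, y in data]

# Train only the linear head: logistic regression via plain SGD.
w, b = [0.0] * 4, 0.0
for _ in range(300):
    for f, y in feats:
        p = 1.0 / (1.0 + math.exp(-(sum(wi * fi for wi, fi in zip(w, f)) + b)))
        err = p - y
        w = [wi - 0.1 * err * fi for wi, fi in zip(w, f)]
        b -= 0.1 * err

acc = sum(
    ((sum(wi * fi for wi, fi in zip(w, f)) + b) > 0) == (y == 1) for f, y in feats
) / len(feats)
print(f"linear-probe accuracy: {acc:.2f}")
```

If the frozen representation is good (here, trivially, because the toy data are nearly separable), a linear head alone recovers high accuracy; this is a standard way to evaluate learned representations.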

Representation learning can be divided into:

  • Supervised representation learning: learning representations on task A using annotated data, then transferring them to solve task B
  • Unsupervised representation learning: learning representations in an unsupervised way, from label-free data. These representations are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks.

More recently, self-supervised learning (SSL) has become one of the main drivers behind unsupervised representation learning in fields like computer vision and NLP.
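A core SSL recipe in vision (SimCLR-style contrastive learning) treats two augmented views of the same sample as a positive pair and the other samples in the batch as negatives, scored with an InfoNCE loss. A self-contained sketch, where the additive-noise "augmentation" and the hand-written embeddings are assumptions for illustration, not outputs of a real encoder:

```python
import math
import random

random.seed(0)

def augment(x):
    """Toy 'augmentation': add small Gaussian noise to an embedding."""
    return [xi + random.gauss(0, 0.05) for xi in x]

def cosine(a, b):
    dot = sum(ai * bi for ai, bi in zip(a, b))
    na = math.sqrt(sum(ai * ai for ai in a))
    nb = math.sqrt(sum(bi * bi for bi in b))
    return dot / (na * nb)

def info_nce(views_a, views_b, tau=0.1):
    """InfoNCE: for each anchor in views_a, the same-index item in
    views_b is the positive; every other item in views_b is a negative."""
    loss = 0.0
    for i, anchor in enumerate(views_a):
        sims = [math.exp(cosine(anchor, v) / tau) for v in views_b]
        loss += -math.log(sims[i] / sum(sims))
    return loss / len(views_a)

# A toy batch of 4 "embeddings" (stand-ins for encoder outputs).
batch = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 1.0, 0.0]]
va = [augment(x) for x in batch]
vb = [augment(x) for x in batch]

aligned = info_nce(va, vb)               # correct positive pairs -> low loss
rotated = info_nce(va, vb[1:] + vb[:1])  # misaligned pairs -> higher loss
print(f"InfoNCE, aligned pairs:    {aligned:.3f}")
print(f"InfoNCE, misaligned pairs: {rotated:.3f}")
```

In practice the embeddings come from an encoder network and the loss gradient updates the encoder; only the objective is illustrated here. The loss is low exactly when each view sits closer to its sibling view than to the rest of the batch, which is what drives the encoder toward augmentation-invariant representations.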

(Image credit: Visualizing and Understanding Convolutional Networks)

Papers

Showing 6601–6650 of 10580 papers

  • CDPS: Constrained DTW-Preserving Shapelets
  • Residual Contrastive Learning: Unsupervised Representation Learning from Residuals
  • Learning Controllable Elements Oriented Representations for Reinforcement Learning
  • Using Graph Representation Learning with Schema Encoders to Measure the Severity of Depressive Symptoms
  • Personalized PageRank meets Graph Attention Networks
  • Iterative Bilinear Temporal-Spectral Fusion for Unsupervised Representation Learning in Time Series
  • Differentiable Expectation-Maximization for Set Representation Learning
  • Interest-based Item Representation Framework for Recommendation with Multi-Interests Capsule Network
  • An object-centric sensitivity analysis of deep learning based instance segmentation
  • One Stage Autoencoders for Multi-Domain Learning
  • Towards simple time-to-event modeling: optimizing neural networks via rank regression
  • Ancestral protein sequence reconstruction using a tree-structured Ornstein-Uhlenbeck variational autoencoder
  • Patchwise Sparse Dictionary Learning from pre-trained Neural Network Activation Maps for Anomaly Detection in Images
  • Surgical Prediction with Interpretable Latent Representation
  • UniFormer: Unified Transformer for Efficient Spatial-Temporal Representation Learning [Code]
  • A Variance Reduction Method for Neural-based Divergence Estimation
  • G^3: Representation Learning and Generation for Geometric Graphs
  • Fine-grained Software Vulnerability Detection via Information Theory and Contrastive Learning
  • Contrastive Label Disambiguation for Partial Label Learning [Code]
  • Value-aware transformers for 1.5d data
  • Federated Contrastive Representation Learning with Feature Fusion and Neighborhood Matching
  • A Transferable General-Purpose Predictor for Neural Architecture Search
  • Scalable Hierarchical Embeddings of Complex Networks
  • Multi-Domain Self-Supervised Learning
  • Mimicking Randomized Controlled Trials to Learn End-to-End Patient Representations through Self-Supervised Covariate Balancing for Causal Treatment Effect Estimation
  • On the interventional consistency of autoencoders
  • Learning Structure from the Ground up---Hierarchical Representation Learning by Chunking
  • Learning Phoneme-Level Discrete Speech Representation with Word-Level Supervision
  • Spatio-temporal Disentangled representation learning for mobility prediction
  • Learning Better Visual Representations for Weakly-Supervised Object Detection Using Natural Language Supervision
  • Regularized Autoencoders for Isometric Representation Learning
  • Spatiotemporal Representation Learning on Time Series with Dynamic Graph ODEs
  • Informative Robust Causal Representation for Generalizable Deep Learning
  • Information-Aware Time Series Meta-Contrastive Learning
  • Implicit Bias of Projected Subgradient Method Gives Provable Robust Recovery of Subspaces of Unknown Codimension
  • Debiasing Pretrained Text Encoders by Paying Attention to Paying Attention
  • SimMER: Simple Maximization of Entropy and Rank for Self-supervised Representation Learning
  • A Deep Latent Space Model for Directed Graph Representation Learning
  • Offline Pre-trained Multi-Agent Decision Transformer
  • Gesture2Vec: Clustering Gestures using Representation Learning Methods for Co-speech Gesture Generation [Code]
  • Shaping latent representations using Self-Organizing Maps with Relevance Learning
  • FP-DETR: Detection Transformer Advanced by Fully Pre-training
  • Feature-Augmented Hypergraph Neural Networks
  • Context-invariant, multi-variate time series representations
  • SynCLR: A Synthesis Framework for Contrastive Learning of out-of-domain Speech Representations
  • ESCo: Towards Provably Effective and Scalable Contrastive Representation Learning
  • Environment Predictive Coding for Visual Navigation
  • Modeling label correlations implicitly through latent label encodings for multi-label text classification
  • Metric Learning on Temporal Graphs via Few-Shot Examples
  • Efficient Token Mixing for Transformers via Adaptive Fourier Neural Operators

Benchmark Results

  #  Model        Metric  Claimed  Verified  Status
  1  SciNCL       Avg.    81.8               Unverified
  2  SPECTER      Avg.    80                 Unverified
  3  Citeomatic   Avg.    76                 Unverified
  4  Sci-DeCLUTR  Avg.    66.6               Unverified
  5  SciBERT      Avg.    59.6               Unverified
  6  BioBERT      Avg.    58.8               Unverified
  7  CiteBERT     Avg.    58.8               Unverified

  #  Model                        Metric        Claimed  Verified  Status
  1  top_model_weights_with_3d_2  1:1 Accuracy  0.75               Unverified

  #  Model      Metric        Claimed  Verified  Status
  1  Resnet 18  Accuracy (%)  97.05              Unverified

  #  Model                  Metric    Claimed  Verified  Status
  1  Morphological Network  Accuracy  97.3               Unverified

  #  Model                   Metric            Claimed  Verified  Status
  1  Max Margin Contrastive  Silhouette Score  0.56               Unverified