SOTAVerified

Representation Learning

Representation Learning is a process in machine learning where algorithms extract meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed for interpretability, reveal hidden features, or be used for transfer learning. They are valuable across many fundamental machine learning tasks like image classification and retrieval.

Deep neural networks can be viewed as representation learning models: they encode the input and project it into a different subspace. These representations are then typically passed to a linear classifier that is trained for the task at hand, for instance image classification.
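This encode-then-linearly-classify setup is often called a linear probe. A minimal sketch with NumPy, using an invented toy setup (a fixed random projection stands in for a frozen pretrained encoder, and the two-blob data is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained network: a fixed (untrained)
# nonlinear projection of 64-dim inputs into a 32-dim feature subspace.
w_enc = rng.normal(size=(64, 32)) / 8.0

def encoder(x):
    return np.tanh(x @ w_enc)

# Toy labelled data: two Gaussian blobs in 64-dim input space.
x = np.vstack([rng.normal(-1.0, 1.0, size=(100, 64)),
               rng.normal(+1.0, 1.0, size=(100, 64))])
y = np.array([0] * 100 + [1] * 100)

feats = encoder(x)  # (200, 32) representations

# Train only a linear classifier on top of the frozen representations
# (logistic regression fitted by plain gradient descent).
w, b = np.zeros(feats.shape[1]), 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # predicted probabilities
    w -= 0.5 * feats.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((feats @ w + b) > 0) == y)  # linear-probe accuracy
```

If the representations capture the class structure, even this simple linear head separates the data well; probing accuracy is a common proxy for representation quality.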

Representation learning can be divided into:

  • Supervised representation learning: representations are learned on task A using annotated data, then reused to solve task B
  • Unsupervised representation learning: representations are learned on a task in an unsupervised way (from label-free data). These are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks.

More recently, self-supervised learning (SSL) has become one of the main drivers behind unsupervised representation learning in fields like computer vision and NLP.
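A common SSL recipe is contrastive: embeddings of two augmented views of the same sample are pulled together while views of different samples are pushed apart. A minimal sketch of a SimCLR-style NT-Xent loss, assuming NumPy and synthetic embeddings (the page names SSL only in general, so this particular objective is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over two batches of L2-normalised views."""
    z = np.vstack([z1, z2])          # (2N, d) all embeddings
    sim = z @ z.T / tau              # cosine similarities / temperature
    np.fill_diagonal(sim, -np.inf)   # exclude self-similarity
    n = len(z1)
    # Each row's positive is the other view of the same sample.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return np.mean(logsumexp - sim[np.arange(2 * n), pos])

def normalise(z):
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# Two lightly perturbed "augmented views" of the same toy embeddings.
base = rng.normal(size=(8, 16))
view1 = normalise(base + 0.05 * rng.normal(size=base.shape))
view2 = normalise(base + 0.05 * rng.normal(size=base.shape))

aligned = nt_xent(view1, view2)         # matched pairs -> lower loss
shuffled = nt_xent(view1, view2[::-1])  # mismatched pairs -> higher loss
```

Minimising this loss over an encoder's outputs encourages view-invariant representations without any labels.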


(Image credit: Visualizing and Understanding Convolutional Networks)

Papers

Showing 9451–9500 of 10580 papers

  • Drop-DTW: Aligning Common Signal Between Sequences While Dropping Outliers
  • Drop Edges and Adapt: a Fairness Enforcing Fine-tuning for Graph Neural Networks
  • Privacy-preserving Voice Analysis via Disentangled Representations
  • Dropout Prediction over Weeks in MOOCs via Interpretable Multi-Layer Representation Learning
  • Dropout Training for SVMs with Data Augmentation
  • Dropping Convexity for More Efficient and Scalable Online Multiview Learning
  • Privacy Safe Representation Learning via Frequency Filtering Encoder
  • Private-Shared Disentangled Multimodal VAE for Learning of Hybrid Latent Representations
  • DSVAE: Interpretable Disentangled Representation for Synthetic Speech Detection
  • DTFormer: A Transformer-Based Method for Discrete-Time Dynamic Graph Representation Learning
  • DTN: Deep Multiple Task-specific Feature Interactions Network for Multi-Task Recommendation
  • Privileged Zero-Shot AutoML
  • Dual-Channel Multiplex Graph Neural Networks for Recommendation
  • Dual-constrained Deep Semi-Supervised Coupled Factorization Network with Enriched Prior
  • Dual Contradistinctive Generative Autoencoder
  • Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models
  • Dual Contrastive Learning for Spatio-temporal Representation
  • MolTRES: Improving Chemical Language Representation Learning for Molecular Property Prediction
  • Self-Supervised Graph Representation Learning via Global Context Prediction
  • Probabilistic Latent Variable Modeling for Dynamic Friction Identification and Estimation
  • A Cyclically-Trained Adversarial Network for Invariant Representation Learning
  • Dual Encoder-Decoder based Generative Adversarial Networks for Disentangled Facial Representation Learning
  • Probabilistic Lexical Manifold Construction in Large Language Models via Hierarchical Vector Field Interpolation
  • Dual-Granularity Contrastive Learning for Session-based Recommendation
  • Probabilistic Multimodal Representation Learning
  • Dual Graph Complementary Network
  • Dual Graph Representation Learning
  • DualHGNN: A Dual Hypergraph Neural Network for Semi-Supervised Node Classification based on Multi-View Learning and Density Awareness
  • Probabilistic Representations for Video Contrastive Learning
  • Probabilistic World Modeling with Asymmetric Distance Measure
  • Dynamic Traceback Learning for Medical Report Generation
  • Dual-Modality Representation Learning for Molecular Property Prediction
  • Dual Motion GAN for Future-Flow Embedded Video Prediction
  • Dual-Neighborhood Deep Fusion Network for Point Cloud Analysis
  • Probing Negative Sampling Strategies to Learn Graph Representations via Unsupervised Contrastive Learning
  • Dual Space Graph Contrastive Learning
  • Dual-space Hierarchical Learning for Goal-guided Conversational Recommendation
  • Probing Contextual Language Models for Common Ground with Visual Representations
  • Dual Transformer for Point Cloud Analysis
  • Probing the Robustness of Independent Mechanism Analysis for Representation Learning
  • AC-VAE: Learning Semantic Representation with VAE for Adaptive Clustering
  • Duplex: Dual Prototype Learning for Compositional Zero-Shot Learning
  • DuRep: Dual-Mode Speech Representation Learning via ASR-Aware Distillation
  • Probing Visual-Audio Representation for Video Highlight Detection via Hard-Pairs Guided Contrastive Learning
  • DYAN: A Dynamical Atoms-Based Network for Video Prediction
  • DyGMamba: Efficiently Modeling Long-Term Temporal Dependency on Continuous-Time Dynamic Graphs with State Space Models
  • DyGSSM: Multi-view Dynamic Graph Embeddings with State Space Model Gradient Update
  • Dynamic-Aware Spatio-temporal Representation Learning for Dynamic MRI Reconstruction
  • Dynamic Community Detection via Adversarial Temporal Graph Representation Learning
  • Procedural Generalization by Planning with Self-Supervised World Models
Page 190 of 212

Benchmark Results

| # | Model       | Metric | Claimed | Verified | Status     |
|---|-------------|--------|---------|----------|------------|
| 1 | SciNCL      | Avg.   | 81.8    |          | Unverified |
| 2 | SPECTER     | Avg.   | 80      |          | Unverified |
| 3 | Citeomatic  | Avg.   | 76      |          | Unverified |
| 4 | Sci-DeCLUTR | Avg.   | 66.6    |          | Unverified |
| 5 | SciBERT     | Avg.   | 59.6    |          | Unverified |
| 6 | BioBERT     | Avg.   | 58.8    |          | Unverified |
| 7 | CiteBERT    | Avg.   | 58.8    |          | Unverified |

| # | Model                       | Metric       | Claimed | Verified | Status     |
|---|-----------------------------|--------------|---------|----------|------------|
| 1 | top_model_weights_with_3d_2 | 1:1 Accuracy | 0.75    |          | Unverified |

| # | Model     | Metric       | Claimed | Verified | Status     |
|---|-----------|--------------|---------|----------|------------|
| 1 | Resnet 18 | Accuracy (%) | 97.05   |          | Unverified |

| # | Model                 | Metric   | Claimed | Verified | Status     |
|---|-----------------------|----------|---------|----------|------------|
| 1 | Morphological Network | Accuracy | 97.3    |          | Unverified |

| # | Model                  | Metric           | Claimed | Verified | Status     |
|---|------------------------|------------------|---------|----------|------------|
| 1 | Max Margin Contrastive | Silhouette Score | 0.56    |          | Unverified |