SOTAVerified

Representation Learning

Representation Learning is a process in machine learning where algorithms extract meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed for interpretability, can reveal hidden features, or can serve as the basis for transfer learning. They are valuable across many fundamental machine learning tasks like image classification and retrieval.

Deep neural networks can be viewed as representation learning models: they encode their input into intermediate representations, typically by projecting it into a different subspace. These representations are then usually passed to a simple model, such as a linear classifier, that is trained for the downstream task.
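As a toy illustration of this pipeline, the sketch below trains a "linear probe" on frozen features. Everything here is invented for illustration: the data are two synthetic Gaussian blobs, and a fixed random projection stands in for a pretrained encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs in a 20-D raw input space.
X = np.vstack([rng.normal(-1, 1, (50, 20)), rng.normal(1, 1, (50, 20))])
y = np.array([0] * 50 + [1] * 50)

# "Frozen encoder": a fixed random projection into an 8-D representation
# space, standing in for a pretrained network that is not updated.
W_enc = rng.normal(0, 0.3, (20, 8))
Z = np.tanh(X @ W_enc)  # stand-in for learned representations

# Linear probe: logistic regression trained on the frozen features only.
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(Z @ w + b)))   # sigmoid predictions
    w -= 0.5 * (Z.T @ (p - y)) / len(y)  # gradient step on weights
    b -= 0.5 * np.mean(p - y)            # gradient step on bias

acc = np.mean((Z @ w + b > 0) == y)
print(f"linear-probe accuracy: {acc:.2f}")
```

If the frozen representation separates the classes well, even this simple linear model classifies accurately, which is why linear probing is a common way to evaluate representation quality.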

Representation learning can be divided into:

  • Supervised representation learning: learning representations on task A using annotated data, then reusing them to solve task B
  • Unsupervised representation learning: learning representations from label-free data in an unsupervised way. These representations are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks.

More recently, self-supervised learning (SSL) has become one of the main drivers behind unsupervised representation learning in fields like computer vision and NLP.
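Many SSL methods (e.g. SimCLR-style approaches) learn representations with a contrastive objective such as the InfoNCE loss, which pulls two augmented views of the same example together and pushes other examples apart. A minimal sketch, with random toy embeddings standing in for network outputs:

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE loss for a batch of positive pairs (z1[i], z2[i]).

    z1, z2: (N, d) arrays of L2-normalised embeddings of two views of
    the same N examples; all mismatched pairs act as negatives.
    """
    logits = z1 @ z2.T / tau                          # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # positives on diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)         # unit-norm embeddings

# Identical views: each positive pair is maximally similar -> low loss.
aligned = info_nce(z, z)
# Misaligned second view: positives no longer match -> higher loss.
shuffled = info_nce(z, z[np.roll(np.arange(8), 1)])
print(aligned, shuffled)
```

The loss is low when matched views agree and high when they do not, which is exactly the signal a self-supervised encoder is trained to minimise without any labels.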


( Image credit: Visualizing and Understanding Convolutional Networks )

Papers

Showing 6201–6250 of 10580 papers

  • Linear-Time Sequence Classification using Restricted Boltzmann Machines
  • MoCLIM: Towards Accurate Cancer Subtyping via Multi-Omics Contrastive Learning with Omics-Inference Modeling
  • Modality-Agnostic Learning for Medical Image Segmentation Using Multi-modality Self-distillation
  • Modality-Agnostic Structural Image Representation Learning for Deformable Multi-Modality Medical Image Registration
  • Linear Matrix Factorization Embeddings for Single-objective Optimization Landscapes
  • Modality Compensation Network: Cross-Modal Adaptation for Action Recognition
  • Linear Disentangled Representations and Unsupervised Action Estimation
  • Linear causal disentanglement via higher-order cumulants
  • LINDA: Multi-Agent Local Information Decomposition for Awareness of Teammates
  • DYAN: A Dynamical Atoms-Based Network for Video Prediction
  • CLERF: Contrastive LEaRning for Full Range Head Pose Estimation
  • Model-Agnostic and Diverse Explanations for Streaming Rumour Graphs
  • Limits of End-to-End Learning
  • Model Debiasing via Gradient-based Explanation on Representation
  • Limitations of Neural Collapse for Understanding Generalization in Deep Learning
  • Model-free Representation Learning and Exploration in Low-rank MDPs
  • Limitations of Cross-Lingual Learning from Image Search
  • Modeling Document-Level Context for Event Detection via Important Context Selection
  • LiGNN: Graph Neural Networks at LinkedIn
  • Modeling Event Propagation via Graph Biased Temporal Point Process
  • DuRep: Dual-Mode Speech Representation Learning via ASR-Aware Distillation
  • Modeling Graph Node Correlations with Neighbor Mixture Models
  • Lightweight Structure-Aware Attention for Visual Understanding
  • Modeling Large-Scale Structured Relationships with Shared Memory for Knowledge Base Completion
  • Modeling Multi-Hop Semantic Paths for Recommendation in Heterogeneous Information Networks
  • Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled Representation
  • Duplex: Dual Prototype Learning for Compositional Zero-Shot Learning
  • Clearing the Path for Truly Semantic Representation Learning
  • Aspect-driven User Preference and News Representation Learning for News Recommendation
  • A General Framework for Content-enhanced Network Representation Learning
  • Active Perception and Representation for Robotic Manipulation
  • Active Multi-Task Representation Learning
  • Lightweight Modality Adaptation to Sequential Recommendation via Correlation Supervision
  • CCPL: Cross-modal Contrastive Protein Learning
  • CLeaRForecast: Contrastive Learning of High-Purity Representations for Time Series Forecasting
  • Lightly-supervised Representation Learning with Global Interpretability
  • Model Provenance via Model DNA
  • Dual Transformer for Point Cloud Analysis
  • Lift, Splat, Map: Lifting Foundation Masks for Label-Free Semantic Scene Completion
  • Contrastive Representation Disentanglement for Clustering
  • A Spatiotemporal Correspondence Approach to Unsupervised LiDAR Segmentation with Traffic Applications
  • LiftPool: Lifting-based Graph Pooling for Hierarchical Graph Representation Learning
  • Lifted Rule Injection for Relation Embeddings
  • Lifestyle-Informed Personalized Blood Biomarker Prediction via Novel Representation Learning
  • Lifelong Learning with Weighted Majority Votes
  • Lifelong Learning of Hate Speech Classification on Social Media
  • Lifelong Knowledge-Enriched Social Event Representation Learning
  • Dual-space Hierarchical Learning for Goal-guided Conversational Recommendation
  • STELLA: Continual Audio-Video Pre-training with Spatio-Temporal Localized Alignment
  • Dual Space Graph Contrastive Learning
Page 125 of 212

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SciNCL | Avg. | 81.8 | — | Unverified
2 | SPECTER | Avg. | 80 | — | Unverified
3 | Citeomatic | Avg. | 76 | — | Unverified
4 | Sci-DeCLUTR | Avg. | 66.6 | — | Unverified
5 | SciBERT | Avg. | 59.6 | — | Unverified
6 | BioBERT | Avg. | 58.8 | — | Unverified
7 | CiteBERT | Avg. | 58.8 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | top_model_weights_with_3d_2 | 1:1 Accuracy | 0.75 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Resnet 18 | Accuracy (%) | 97.05 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Morphological Network | Accuracy | 97.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Max Margin Contrastive | Silhouette Score | 0.56 | — | Unverified