SOTAVerified

Representation Learning

Representation learning is the process by which a machine learning algorithm extracts meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed for interpretability, can reveal hidden features, or can serve as the basis for transfer learning, and they are valuable across many fundamental machine learning tasks such as image classification and retrieval.

Deep neural networks can be viewed as representation learning models: they encode the input by projecting it into a different subspace. The resulting representations are then typically passed to a simple model, such as a linear classifier, to solve the downstream task.
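This "frozen encoder plus linear probe" pattern can be sketched in a few lines. The sketch below is illustrative only: the "encoder" is a fixed random ReLU projection standing in for a pretrained network, and the toy data and hyperparameters are assumptions, not part of any specific method.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    """Project raw inputs into a representation space (ReLU of a fixed map).
    In practice this would be a pretrained deep network, kept frozen."""
    return np.maximum(x @ W, 0.0)

# Toy data: two classes separated along the first input dimension.
X = rng.normal(size=(200, 8))
y = (X[:, 0] > 0).astype(float)

W_enc = rng.normal(size=(8, 16))   # frozen "pretrained" weights
Z = encoder(X, W_enc)              # representations of the raw inputs

# Linear probe: logistic regression trained on top of the frozen features.
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))  # predicted probabilities
    w -= 0.5 * (Z.T @ (p - y)) / len(y)     # gradient step on weights
    b -= 0.5 * np.mean(p - y)               # gradient step on bias

acc = np.mean(((Z @ w + b) > 0) == (y > 0.5))
print(f"linear-probe accuracy: {acc:.2f}")
```

Only the small linear readout is trained; the encoder's weights never change, which is exactly why good representations make downstream tasks cheap to solve.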

Representation learning can be divided into:

  • Supervised representation learning: representations are learned on a task A using annotated data and then transferred to solve a task B.
  • Unsupervised representation learning: representations are learned on label-free data in an unsupervised way. They are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle a wide range of language tasks.

More recently, self-supervised learning (SSL) has become one of the main drivers of unsupervised representation learning in fields like computer vision and NLP.
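A common ingredient in self-supervised methods is a contrastive objective such as InfoNCE, where two augmented views of the same example attract each other while unrelated examples repel. The sketch below is a minimal numpy illustration under assumed toy settings (the "views" are the same embeddings plus small noise); the names and temperature value are illustrative, not any library's API.

```python
import numpy as np

rng = np.random.default_rng(1)

def l2_normalize(z):
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss: for each row of z1, the matching row of z2 is the
    positive; every other row in the batch is a negative."""
    z1, z2 = l2_normalize(z1), l2_normalize(z2)
    logits = z1 @ z2.T / temperature          # pairwise cosine similarities
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z1))                  # positives sit on the diagonal
    return -np.mean(log_probs[idx, idx])

# Two "augmented views": the same embeddings perturbed by small noise.
z = rng.normal(size=(32, 64))
view1 = z + 0.01 * rng.normal(size=z.shape)
view2 = z + 0.01 * rng.normal(size=z.shape)

aligned = info_nce(view1, view2)              # matching views: low loss
random_pairs = info_nce(view1, rng.normal(size=z.shape))  # unrelated: high loss
print(f"aligned: {aligned:.3f}, random: {random_pairs:.3f}")
```

The loss is much lower when the two batches really are views of the same underlying examples, which is the signal SSL exploits to learn representations without labels.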


(Image credit: Visualizing and Understanding Convolutional Networks)

Papers

Showing 5176–5200 of 10580 papers

Title | Status | Hype
DiffuseGAE: Controllable and High-fidelity Image Manipulation from Disentangled Representation | - | 0
Transformers in Reinforcement Learning: A Survey | - | 0
DDNAS: Discretized Differentiable Neural Architecture Search for Text Classification | Code | 0
Effective Latent Differential Equation Models via Attention and Multiple Shooting | - | 0
Transaction Fraud Detection via Spatial-Temporal-Aware Graph Transformer | - | 0
A Causal Ordering Prior for Unsupervised Representation Learning | - | 0
Unbiased Scene Graph Generation via Two-stage Causal Modeling | - | 0
Graph Contrastive Learning with Multi-Objective for Personalized Product Retrieval in Taobao Search | - | 0
Source-Aware Embedding Training on Heterogeneous Information Networks | - | 0
Enhancing Cross-lingual Transfer via Phonemic Transcription Integration | Code | 0
Neural Causal Graph Collaborative Filtering | Code | 0
Joint Salient Object Detection and Camouflaged Object Detection via Uncertainty-aware Learning | - | 0
Diffusion Policies for Out-of-Distribution Generalization in Offline Reinforcement Learning | - | 0
Improving Heterogeneous Graph Learning with Weighted Mixed-Curvature Product Manifold | Code | 0
Text Descriptions are Compressive and Invariant Representations for Visual Learning | - | 0
Semi Supervised Meta Learning for Spatiotemporal Learning | - | 0
End-to-End Supervised Multilabel Contrastive Learning | Code | 0
Efficient Model-Free Exploration in Low-Rank MDPs | - | 0
On-Device Constrained Self-Supervised Speech Representation Learning for Keyword Spotting via Knowledge Distillation | - | 0
Attentive Graph Enhanced Region Representation Learning | - | 0
Policy Contrastive Imitation Learning | - | 0
Graph Contrastive Topic Model | Code | 0
Focusing on what to decode and what to train: SOV Decoding with Specific Target Guided DeNoising and Vision Language Advisor | Code | 0
Flowchase: a Mobile Application for Pronunciation Training | - | 0
Source Identification: A Self-Supervision Task for Dense Prediction | - | 0
Page 208 of 424

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SciNCL | Avg. | 81.8 | - | Unverified
2 | SPECTER | Avg. | 80 | - | Unverified
3 | Citeomatic | Avg. | 76 | - | Unverified
4 | Sci-DeCLUTR | Avg. | 66.6 | - | Unverified
5 | SciBERT | Avg. | 59.6 | - | Unverified
6 | BioBERT | Avg. | 58.8 | - | Unverified
7 | CiteBERT | Avg. | 58.8 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | top_model_weights_with_3d_2 | 1:1 Accuracy | 0.75 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Resnet 18 | Accuracy (%) | 97.05 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Morphological Network | Accuracy | 97.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Max Margin Contrastive | Silhouette Score | 0.56 | - | Unverified