
Representation Learning

Representation Learning is a process in machine learning where algorithms extract meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed for interpretability, reveal hidden features, or be used for transfer learning. They are valuable across many fundamental machine learning tasks like image classification and retrieval.

Deep neural networks can be viewed as representation learning models: they encode their input into intermediate representations that live in a learned feature space. These representations are then typically passed to a simple head, for instance a linear classifier, to solve a downstream task, as sketched below.
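A common way to evaluate such representations is linear probing: freeze a pretrained backbone and train only a linear classifier on its features. The following is a minimal PyTorch sketch; the ResNet-18 backbone, the 512-dimensional feature size, and the 10-class task are illustrative assumptions, not something specified by this page.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone used as a fixed feature extractor (illustrative choice).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()          # drop the original classification head
for p in backbone.parameters():
    p.requires_grad = False          # freeze: we only probe the representation
backbone.eval()

# Linear probe: a single linear layer trained on top of the frozen features.
num_classes = 10                     # hypothetical downstream task
probe = nn.Linear(512, num_classes)  # ResNet-18 emits 512-d features
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    with torch.no_grad():            # representations come from the frozen net
        feats = backbone(images)     # (batch, 512)
    logits = probe(feats)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

If the probe reaches high accuracy, the frozen representation already separates the classes well; this is why linear probing is a standard proxy for representation quality.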

Representation learning can be divided into:

  • Supervised representation learning: learning representations on task A using annotated data, then reusing them to solve a different task B
  • Unsupervised representation learning: learning representations from label-free data in an unsupervised way. These representations are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks (see the sketch after this list).
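To make the transfer idea concrete, the sketch below uses the Hugging Face transformers library to extract sentence representations from a pretrained BERT model; the `bert-base-uncased` checkpoint and the mean-pooling strategy are illustrative choices, not prescribed by this page.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# BERT was pretrained without task labels; its hidden states serve as
# general-purpose text representations for downstream tasks.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()

@torch.no_grad()
def embed(sentences):
    batch = tokenizer(sentences, padding=True, truncation=True,
                      return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state      # (batch, seq, 768)
    mask = batch["attention_mask"].unsqueeze(-1)     # ignore padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)      # mean-pooled embeddings

vectors = embed(["representation learning", "transfer learning"])
print(vectors.shape)  # torch.Size([2, 768])
```

These embeddings can then feed a small task-specific model, so the downstream task needs far fewer labeled examples.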

More recently, self-supervised learning (SSL) has become one of the main drivers behind unsupervised representation learning in fields like computer vision and NLP.
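As a minimal sketch of one popular SSL objective, here is a SimCLR-style NT-Xent contrastive loss; the function name and temperature value are illustrative, and the two inputs are assumed to be embeddings of two random augmentations of the same batch of images.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss over two augmented views.

    z1, z2: (batch, dim) embeddings of the same images under two
    random augmentations; matching rows are positive pairs.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, dim)
    sim = z @ z.T / temperature                          # cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-pairs
    # Row i's positive is its other view: i+B for the first half, i-B after.
    batch = z1.shape[0]
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets)
```

Pulling the two views of each image together while pushing apart all other pairs is what lets the network learn useful representations without any labels.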


(Image credit: Visualizing and Understanding Convolutional Networks)

Papers

Showing 371–380 of 10,580 papers

Title | Status | Hype
Neural-MCRL: Neural Multimodal Contrastive Representation Learning for EEG-based Visual Decoding | Code | 1
MOL-Mamba: Enhancing Molecular Representation with Structural & Electronic Insights | Code | 1
Tokenphormer: Structure-aware Multi-token Graph Transformer for Node Classification | Code | 1
USDRL: Unified Skeleton-Based Dense Representation Learning with Multi-Grained Feature Decorrelation | Code | 1
Repository-Level Graph Representation Learning for Enhanced Security Patch Detection | Code | 1
Can a MISL Fly? Analysis and Ingredients for Mutual Information Skill Learning | Code | 1
Two stages domain invariant representation learners solve the large co-variate shift in unsupervised domain adaptation with two dimensional data domains | Code | 1
HEAL: Hierarchical Embedding Alignment Loss for Improved Retrieval and Representation Learning | Code | 1
Multi-Scale Representation Learning for Protein Fitness Prediction | Code | 1
Down with the Hierarchy: The 'H' in HNSW Stands for "Hubs" | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SciNCL | Avg. | 81.8 | - | Unverified
2 | SPECTER | Avg. | 80 | - | Unverified
3 | Citeomatic | Avg. | 76 | - | Unverified
4 | Sci-DeCLUTR | Avg. | 66.6 | - | Unverified
5 | SciBERT | Avg. | 59.6 | - | Unverified
6 | CiteBERT | Avg. | 58.8 | - | Unverified
7 | BioBERT | Avg. | 58.8 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | top_model_weights_with_3d_2 | 1:1 Accuracy | 0.75 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Resnet 18 | Accuracy (%) | 97.05 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Morphological Network | Accuracy | 97.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Max Margin Contrastive | Silhouette Score | 0.56 | - | Unverified