
Representation Learning

Representation learning is the process by which machine learning algorithms extract meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed for interpretability, can reveal hidden features, or can be used for transfer learning. They are valuable across many fundamental machine learning tasks, such as image classification and retrieval.

Deep neural networks can themselves be viewed as representation learning models: they encode the input and project it into a different subspace, and the resulting representations are then typically passed to a simple predictor, such as a linear classifier, to solve the task at hand.
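
As a rough illustration of this encoder-plus-linear-classifier pattern, here is a minimal PyTorch sketch of a "linear probe": a frozen pretrained backbone produces representations, and only a linear classifier is trained on top. The ResNet-18 backbone, the 512-dimensional feature size, and the 10-class head are illustrative assumptions, not details from the text above.

```python
# Minimal sketch of the "encoder + linear classifier" pattern (a linear probe).
# The backbone, feature size, and class count below are illustrative choices.
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone acts as the representation learner.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()   # drop the original classification head
backbone.eval()               # freeze: we only read out representations

# A linear classifier trained on top of the frozen 512-d representations.
probe = nn.Linear(512, 10)    # 10 is a hypothetical number of classes
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    with torch.no_grad():              # representations are not updated
        features = backbone(images)    # (batch, 512) representations
    logits = probe(features)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```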

Representation learning can be divided into:

  • Supervised representation learning: representations are learned on task A using annotated data and then reused to solve task B.
  • Unsupervised representation learning: representations are learned from label-free data in an unsupervised way and then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks (see the sketch after this list).
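
As a hedged illustration of the unsupervised route, the sketch below reuses pretrained BERT representations as fixed features for a downstream classifier trained on only a handful of labels. The model name, the mean-pooling step, and the toy data are assumptions made for illustration; any pretrained sentence encoder could stand in.

```python
# Sketch: reuse pretrained (unsupervised) representations for a downstream task.
# Model name, pooling strategy, and toy data are illustrative placeholders.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()

def embed(texts):
    """Mean-pool BERT token representations into one vector per sentence."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state   # (B, T, 768)
    mask = batch["attention_mask"].unsqueeze(-1)      # ignore padding tokens
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

# A small annotated set can suffice once representations are pretrained.
train_texts = ["great movie", "terrible plot"]        # toy placeholder data
train_labels = [1, 0]
clf = LogisticRegression().fit(embed(train_texts), train_labels)
print(clf.predict(embed(["what a wonderful film"])))
```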

More recently, self-supervised learning (SSL) has become one of the main drivers of unsupervised representation learning in fields like computer vision and NLP.
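
One common SSL objective is a contrastive (InfoNCE-style) loss that pulls together the representations of two augmented views of the same sample while pushing apart all other samples in the batch. The sketch below is a simplified version of that objective; the temperature value and the random stand-in encoder outputs are illustrative assumptions.

```python
# Simplified contrastive (InfoNCE-style) loss used in much SSL work.
# Temperature and the stand-in encoder outputs are illustrative choices.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1):
    """z1[i] and z2[i] are representations of two augmented views of sample i.

    Each pair (z1[i], z2[i]) is pulled together; every other pairing in the
    batch acts as a negative.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature       # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))       # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Usage: encode two augmentations of the same batch and minimize the loss.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)  # stand-ins for encoder output
loss = info_nce(z1, z2)
```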


(Image credit: Visualizing and Understanding Convolutional Networks)

Papers

Showing 3926–3950 of 10580 papers

The papers on this page (each currently showing a Hype score of 0):

  • Global Convergence and Rich Feature Learning in L-Layer Infinite-Width Neural Networks under μP Parametrization
  • Global Interaction Modelling in Vision Transformer via Super Tokens
  • Global Intervention and Distillation for Federated Out-of-Distribution Generalization
  • Code Representation Learning At Scale
  • Global-Local GCN: Large-Scale Label Noise Cleansing for Face Recognition
  • Global-Locally Self-Attentive Encoder for Dialogue State Tracking
  • Cross-Modal Alignment Learning of Vision-Language Conceptual Systems
  • Global Optimality in Neural Network Training
  • Efficient Receptive Field Learning by Dynamic Gaussian Structure
  • Efficient Planning with Latent Diffusion
  • CODER: Coupled Diversity-Sensitive Momentum Contrastive Learning for Image-Text Retrieval
  • Efficient Object-centric Representation Learning with Pre-trained Geometric Prior
  • AdaFedFR: Federated Face Recognition with Adaptive Inter-Class Representation Learning
  • Efficient Multiscale Multimodal Bottleneck Transformer for Audio-Video Classification
  • CODE-MVP: Learning to Represent Source Code from Multiple Views with Contrastive Pre-Training
  • Cross-modal Common Representation Learning by Hybrid Transfer Network
  • Hybrid deep learning methods for phenotype prediction from clinical notes
  • Hybrid Distillation: Connecting Masked Autoencoders with Contrastive Learners
  • GNEG: Graph-Based Negative Sampling for word2vec
  • Hybrid Graph: A Unified Graph Representation with Datasets and Benchmarks for Complex Graphs
  • Cross-Modal Discrete Representation Learning
  • Efficient Multi-Model Fusion with Adversarial Complementary Representation Learning
  • Code Completion by Modeling Flattened Abstract Syntax Trees as Graphs
  • Efficient Model-Free Exploration in Low-Rank MDPs
  • A Survey on Bridging EEG Signals and Generative AI: From Image and Text to Beyond
Page 158 of 424

Benchmark Results

#  Model        Metric  Claimed  Verified  Status
1  SciNCL       Avg.    81.8     —         Unverified
2  SPECTER      Avg.    80       —         Unverified
3  Citeomatic   Avg.    76       —         Unverified
4  Sci-DeCLUTR  Avg.    66.6     —         Unverified
5  SciBERT      Avg.    59.6     —         Unverified
6  BioBERT      Avg.    58.8     —         Unverified
7  CiteBERT     Avg.    58.8     —         Unverified
#  Model                        Metric        Claimed  Verified  Status
1  top_model_weights_with_3d_2  1:1 Accuracy  0.75     —         Unverified
#  Model      Metric        Claimed  Verified  Status
1  Resnet 18  Accuracy (%)  97.05    —         Unverified
#  Model                  Metric    Claimed  Verified  Status
1  Morphological Network  Accuracy  97.3     —         Unverified
#  Model                   Metric            Claimed  Verified  Status
1  Max Margin Contrastive  Silhouette Score  0.56     —         Unverified