SOTAVerified

Representation Learning

Representation Learning is a process in machine learning where algorithms extract meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed to be interpretable, to reveal hidden features, or to serve as the basis for transfer learning. They are valuable across many fundamental machine learning tasks like image classification and retrieval.
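As a toy illustration of the retrieval use case: once items are mapped to embeddings, retrieval reduces to nearest-neighbour search in the representation space. A minimal numpy sketch (the embeddings below are made up for illustration, not outputs of a real model):

```python
import numpy as np

# Hypothetical learned representations: one 4-d embedding per item.
embeddings = np.array([
    [1.0, 0.0, 0.2, 0.1],   # item 0 (the query)
    [0.9, 0.1, 0.3, 0.0],   # item 1, similar to item 0
    [0.0, 1.0, 0.0, 0.9],   # item 2, dissimilar
])

def cosine_sim(a, b):
    """Cosine similarity between two 1-d vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

query = embeddings[0]
# Rank the remaining items by cosine similarity to the query.
scores = [cosine_sim(query, e) for e in embeddings[1:]]
best = int(np.argmax(scores)) + 1  # offset past the query itself
# best == 1: item 1 is the nearest neighbour
```

The same pattern scales to real image or text retrieval, with the encoder supplying the embeddings and an approximate-nearest-neighbour index replacing the brute-force loop.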

Deep neural networks can themselves be viewed as representation learning models: they encode the input and project it into a learned feature space. These representations are then typically passed to a linear classifier, for instance to train an image classifier.
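The recipe above (frozen encoder plus linear classifier, often called a linear probe) can be sketched as follows. This is a toy stand-in on synthetic data: the "encoder" is a fixed random projection rather than a pretrained network, and the probe is a closed-form least-squares classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes in raw-input space (synthetic data).
X = np.vstack([rng.normal(-1, 1, (100, 8)), rng.normal(1, 1, (100, 8))])
y = np.array([0] * 100 + [1] * 100)

# Stand-in for a frozen, pretrained encoder: a fixed random
# projection with a ReLU. Its weights are NOT trained here.
W_enc = rng.normal(size=(8, 16))
features = np.maximum(X @ W_enc, 0.0)

# Linear probe: a least-squares classifier fit on the frozen features.
F = np.hstack([features, np.ones((len(X), 1))])   # append a bias column
w = np.linalg.lstsq(F, 2 * y - 1, rcond=None)[0]  # targets in {-1, +1}
pred = (F @ w > 0).astype(int)
accuracy = (pred == y).mean()
```

Because the classes are well separated, even this untrained encoder yields a high linear-probe accuracy; with a real pretrained encoder, probe accuracy is a standard measure of representation quality.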

Representation learning can be divided into:

  • Supervised representation learning: representations are learned on task A using annotated data, then reused to solve a different task B
  • Unsupervised representation learning: representations are learned on label-free data in an unsupervised way. They are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks.

More recently, self-supervised learning (SSL) has become one of the main drivers of unsupervised representation learning in fields like computer vision and NLP.
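Many contrastive SSL methods in vision (e.g. SimCLR-style training) optimise the InfoNCE objective: the embeddings of two augmented views of the same input should match each other rather than the other items in the batch. A minimal numpy sketch, with random vectors standing in for real encoder outputs:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE: row i of z1 should match row i of z2 (the positive pair)."""
    # L2-normalise both sets of embeddings.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # pairwise cosine similarities
    # Cross-entropy with the diagonal (matching pairs) as targets.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
noise = 0.01 * rng.normal(size=(4, 8))
# Two "views" of the same items: nearly identical embeddings -> low loss.
loss_aligned = info_nce_loss(z, z + noise)
# Unrelated embeddings -> no preferred match -> higher loss.
loss_random = info_nce_loss(z, rng.normal(size=(4, 8)))
```

In a real pipeline the two views come from data augmentation and the embeddings from a trained encoder; minimising this loss pulls positive pairs together while pushing apart the other batch items.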


( Image credit: Visualizing and Understanding Convolutional Networks )

Papers

Showing 1251–1300 of 10580 papers

| Title | Status | Hype |
| --- | --- | --- |
| Expressing Multivariate Time Series as Graphs with Time Series Attention Transformer | Code | 1 |
| Self-Supervised Place Recognition by Refining Temporal and Featural Pseudo Labels from Panoramic Data | Code | 1 |
| Representation Learning for the Automatic Indexing of Sound Effects Libraries | Code | 1 |
| Modeling Two-Way Selection Preference for Person-Job Fit | Code | 1 |
| Prompt Vision Transformer for Domain Generalization | Code | 1 |
| A Hybrid Self-Supervised Learning Framework for Vertical Federated Learning | Code | 1 |
| Efficient Multimodal Transformer with Dual-Level Feature Restoration for Robust Multimodal Sentiment Analysis | Code | 1 |
| Toward Interpretable Sleep Stage Classification Using Cross-Modal Transformers | Code | 1 |
| Scaling Up Dynamic Graph Representation Learning via Spiking Neural Networks | Code | 1 |
| GPPT: Graph Pre-training and Prompt Tuning to Generalize Graph Neural Networks | Code | 1 |
| Practical Vertical Federated Learning with Unsupervised Representation Learning | Code | 1 |
| RenyiCL: Contrastive Representation Learning with Skew Renyi Divergence | Code | 1 |
| Semi-Supervised Junction Tree Variational Autoencoder for Molecular Property Prediction | Code | 1 |
| Generative Action Description Prompts for Skeleton-based Action Recognition | Code | 1 |
| Motif-based Graph Representation Learning with Application to Chemical Molecules | Code | 1 |
| Exploring Resolution and Degradation Clues as Self-supervised Signal for Low Quality Object Detection | Code | 1 |
| Localized Sparse Incomplete Multi-view Clustering | Code | 1 |
| Disentangled Representation Learning for RF Fingerprint Extraction under Unknown Channel Statistics | Code | 1 |
| OpenCon: Open-world Contrastive Learning | Code | 1 |
| SC6D: Symmetry-agnostic and Correspondence-free 6D Object Pose Estimation | Code | 1 |
| Convolutional Fine-Grained Classification with Self-Supervised Target Relation Regularization | Code | 1 |
| Large-Scale Product Retrieval with Weakly Supervised Representation Learning | Code | 1 |
| Revisiting the Critical Factors of Augmentation-Invariant Representation Learning | Code | 1 |
| Contrastive UCB: Provably Efficient Contrastive Self-Supervised Learning in Online Reinforcement Learning | Code | 1 |
| ScaleFormer: Revisiting the Transformer-based Backbones from a Scale-wise Perspective for Medical Image Segmentation | Code | 1 |
| Static and Dynamic Concepts for Self-supervised Video Representation Learning | Code | 1 |
| Jigsaw-ViT: Learning Jigsaw Puzzles in Vision Transformer | Code | 1 |
| Generative Subgraph Contrast for Self-Supervised Graph Representation Learning | Code | 1 |
| Homomorphism Autoencoder -- Learning Group Structured Representations from Observed Transitions | Code | 1 |
| Deep Laparoscopic Stereo Matching with Transformers | Code | 1 |
| Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition | Code | 1 |
| Self-supervised contrastive learning of echocardiogram videos enables label-efficient cardiac disease diagnosis | Code | 1 |
| μKG: A Library for Multi-source Knowledge Graph Embeddings and Applications | Code | 1 |
| Automated Dilated Spatio-Temporal Synchronous Graph Modeling for Traffic Prediction | Code | 1 |
| Adaptive Soft Contrastive Learning | Code | 1 |
| Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness | Code | 1 |
| Hyper-Representations for Pre-Training and Transfer Learning | Code | 1 |
| Leveraging Natural Supervision for Language Representation Learning and Generation | Code | 1 |
| UFO: Unified Feature Optimization | Code | 1 |
| Tailoring Self-Supervision for Supervised Learning | Code | 1 |
| Hierarchically Self-Supervised Transformer for Human Skeleton Representation Learning | Code | 1 |
| Feature Representation Learning for Unsupervised Cross-domain Image Retrieval | Code | 1 |
| Beyond Homophily: Structure-aware Path Aggregation Graph Neural Network | Code | 1 |
| Balanced Contrastive Learning for Long-Tailed Visual Recognition | Code | 1 |
| DHGE: Dual-View Hyper-Relational Knowledge Graph Embedding for Link Prediction and Entity Typing | Code | 1 |
| FunQG: Molecular Representation Learning Via Quotient Graphs | Code | 1 |
| GANDALF: Gated Adaptive Network for Deep Automated Learning of Features | Code | 1 |
| Semantic Novelty Detection via Relational Reasoning | Code | 1 |
| FashionViL: Fashion-Focused Vision-and-Language Representation Learning | Code | 1 |
| Toward reliable signals decoding for electroencephalogram: A benchmark study to EEGNeX | Code | 1 |
Page 26 of 212

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SciNCL | Avg. | 81.8 | | Unverified |
| 2 | SPECTER | Avg. | 80 | | Unverified |
| 3 | Citeomatic | Avg. | 76 | | Unverified |
| 4 | Sci-DeCLUTR | Avg. | 66.6 | | Unverified |
| 5 | SciBERT | Avg. | 59.6 | | Unverified |
| 6 | BioBERT | Avg. | 58.8 | | Unverified |
| 7 | CiteBERT | Avg. | 58.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | top_model_weights_with_3d_2 | 1:1 Accuracy | 0.75 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Resnet 18 | Accuracy (%) | 97.05 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Morphological Network | Accuracy | 97.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Max Margin Contrastive | Silhouette Score | 0.56 | | Unverified |