SOTAVerified

Representation Learning

Representation Learning is a process in machine learning where algorithms extract meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed for interpretability, reveal hidden features, or be used for transfer learning. They are valuable across many fundamental machine learning tasks like image classification and retrieval.

Deep neural networks can be viewed as representation learning models: they encode the input and project it into a different subspace. These representations are then typically fed to a simple predictor, such as a linear classifier, to solve the task at hand.
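The "frozen encoder plus linear classifier" setup above is often called a linear probe. Below is a minimal NumPy sketch of the idea; the random projection standing in for a pretrained encoder and all data are illustrative assumptions, not from any particular model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian classes in a 20-D raw input space.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 20)),
               rng.normal(1.0, 1.0, (100, 20))])
y = np.array([0] * 100 + [1] * 100)

# Stand-in "encoder": a frozen random projection into an 8-D representation.
# In practice this would be a pretrained deep network with frozen weights.
W_enc = rng.normal(size=(20, 8))
Z = X @ W_enc  # fixed representations; never updated below

# Linear probe: logistic regression trained only on the frozen representations.
w, b = np.zeros(8), 0.0
for _ in range(500):
    logits = np.clip(Z @ w + b, -30, 30)   # clip for numerical stability
    p = 1.0 / (1.0 + np.exp(-logits))      # predicted P(y = 1)
    w -= 0.1 * Z.T @ (p - y) / len(y)      # gradient step on the log-loss
    b -= 0.1 * np.mean(p - y)

acc = np.mean(((Z @ w + b) > 0) == (y == 1))
print(f"linear-probe accuracy: {acc:.2f}")
```

If the frozen representations separate the classes well, even this simple linear classifier achieves high accuracy — which is exactly why linear probes are used to measure representation quality.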

Representation learning can be divided into:

  • Supervised representation learning: learning representations on task A using annotated data, then reusing them to solve task B
  • Unsupervised representation learning: learning representations on label-free data. These representations are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks.
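The unsupervised-then-downstream pipeline can be sketched with a classical unsupervised representation learner, PCA: a representation is fit on plentiful unlabeled data and then reused for a small labeled task. All data, dimensions, and the nearest-centroid downstream rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Plentiful unlabeled data: a 2-D latent signal embedded in 50-D with noise.
latent = rng.normal(size=(500, 2))
mix = rng.normal(size=(2, 50))
X_unlabeled = latent @ mix + 0.1 * rng.normal(size=(500, 50))

# Task A (unsupervised): learn a representation with no labels at all —
# here, projection onto the top-2 principal components of the unlabeled data.
mean = X_unlabeled.mean(axis=0)
_, _, Vt = np.linalg.svd(X_unlabeled - mean, full_matrices=False)
encode = lambda X: (X - mean) @ Vt[:2].T

# Task B (downstream): a small labeled set, classified in the learned
# 2-D representation with a simple nearest-centroid rule.
lat0 = rng.normal(loc=(2.0, 0.0), size=(20, 2))
lat1 = rng.normal(loc=(-2.0, 0.0), size=(20, 2))
X_labeled = np.vstack([lat0, lat1]) @ mix + 0.1 * rng.normal(size=(40, 50))
y = np.array([0] * 20 + [1] * 20)

Z = encode(X_labeled)
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
acc = (pred == y).mean()
print("downstream accuracy:", acc)
```

Only 40 labeled examples are needed here because the 50-D inputs were already compressed to an informative 2-D representation without labels — the core promise of unsupervised representation learning.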

More recently, self-supervised learning (SSL) has become one of the main drivers of unsupervised representation learning in fields like computer vision and NLP.
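A common SSL objective is contrastive: pull two augmented views of the same example together while pushing apart views of different examples. Below is a minimal NumPy sketch of the InfoNCE loss used in methods such as SimCLR; the random embeddings and noise-based "augmentation" are illustrative assumptions, not a real training pipeline.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Contrastive InfoNCE loss for a batch of paired embeddings,
    where z1[i] and z2[i] are two views of the same example."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # pairwise cosine similarities
    # Row i's positive is column i; every other column acts as a negative.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_prob).mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 16))
noisy_view = z + 0.05 * rng.normal(size=(32, 16))  # stand-in "augmentation"
loss_aligned = info_nce(z, noisy_view)
loss_shuffled = info_nce(z, rng.permutation(noisy_view, axis=0))
print(loss_aligned, loss_shuffled)  # aligned pairs should score far lower
```

Minimizing this loss drives an encoder to produce representations that are stable under augmentation yet discriminative between examples — no labels required.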


( Image credit: Visualizing and Understanding Convolutional Networks )

Papers

Showing 50 of 10580 papers

  • A New Perspective to Boost Vision Transformer for Medical Image Classification
  • Neuro-Symbolic Visual Reasoning: Disentangling "Visual" from "Reasoning"
  • Exploring Efficient-Tuned Learning Audio Representation Method from BriVL
  • New Benchmark for Household Garbage Image Recognition
  • Hyperbolic Knowledge Transfer in Cross-Domain Recommendation System
  • Hyperbolic Image-and-Pointcloud Contrastive Learning for 3D Classification
  • Accelerating Graph Sampling for Graph Machine Learning using GPUs
  • Deep Learning in Cardiology
  • nGPT: Normalized Transformer with Representation Learning on the Hypersphere
  • Exploring Set Similarity for Dense Self-supervised Representation Learning
  • Hyperbolic Graph Representation Learning: A Tutorial
  • Deep Learning for Spatio-Temporal Data Mining: A Survey
  • BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing
  • NLP and Online Health Reports: What do we say and what do we mean?
  • NODDLE: Node2vec based deep learning model for link prediction
  • node2coords: Graph Representation Learning with Wasserstein Barycenters
  • Adversarial Attack on Hierarchical Graph Pooling Neural Networks
  • On the Provable Advantage of Unsupervised Pretraining
  • Node Classification Meets Link Prediction on Knowledge Graphs
  • Node Embeddings via Neighbor Embeddings
  • Node Level Graph Autoencoder: Unified Pretraining for Textual Graph Learning
  • 3D Vision-Language Gaussian Splatting
  • Deep learning for neuroimaging: a validation study
  • Node Representation Learning for Directed Graphs
  • On the Power of Randomization in Fair Classification and Representation
  • Hyperbolic Deep Learning in Computer Vision: A Survey
  • Exploring the Combination of Contextual Word Embeddings and Knowledge Graph Embeddings
  • No Free Lunch in Self Supervised Representation Learning
  • Exploring the Effectiveness of Object-Centric Representations in Visual Question Answering: Comparative Insights with Foundation Models
  • Attribute Prototype Network for Any-Shot Learning
  • Blind Image Super-Resolution via Contrastive Representation Learning
  • Hyperbolic Contrastive Learning
  • Deep Learning for Code Intelligence: Survey, Benchmark and Toolkit
  • A New Modal Autoencoder for Functionally Independent Feature Extraction
  • On the Pros and Cons of Momentum Encoder in Self-Supervised Visual Representation Learning
  • On the relationship between Normalising Flows and Variational- and Denoising Autoencoders
  • Non-contrastive representation learning for intervals from well logs
  • Context-Aware Smoothing for Neural Machine Translation
  • On the Transfer of Disentangled Representations in Realistic Settings
  • Sample-efficient Adversarial Imitation Learning
  • Open-Set Representation Learning through Combinatorial Embedding
  • Attributes-aware Visual Emotion Representation Learning
  • Exploring the Value of Multi-View Learning for Session-Aware Query Representation
  • Nonlinear Independent Component Analysis for Principled Disentanglement in Unsupervised Deep Learning
  • Overcoming Data Sparsity in Group Recommendation
  • Nonlinear spiked covariance matrices and signal propagation in deep neural networks
  • Deep Learning-based Pupil Center Detection for Fast and Accurate Eye Tracking System
  • Exploring Transferable Homogeneous Groups for Compositional Zero-Shot Learning
  • HYFuse: Aligning Heterogeneous Speech Pre-Trained Representations in Hyperbolic Space for Speech Emotion Recognition
  • HYDEN: Hyperbolic Density Representations for Medical Images and Reports
Page 136 of 212

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | SciNCL | Avg. | 81.8 | | Unverified |
| 2 | SPECTER | Avg. | 80 | | Unverified |
| 3 | Citeomatic | Avg. | 76 | | Unverified |
| 4 | Sci-DeCLUTR | Avg. | 66.6 | | Unverified |
| 5 | SciBERT | Avg. | 59.6 | | Unverified |
| 6 | BioBERT | Avg. | 58.8 | | Unverified |
| 7 | CiteBERT | Avg. | 58.8 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | top_model_weights_with_3d_2 | 1:1 Accuracy | 0.75 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Resnet 18 | Accuracy (%) | 97.05 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Morphological Network | Accuracy | 97.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Max Margin Contrastive | Silhouette Score | 0.56 | | Unverified |