
Representation Learning

Representation Learning is a process in machine learning where algorithms extract meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed for interpretability, reveal hidden features, or be used for transfer learning. They are valuable across many fundamental machine learning tasks like image classification and retrieval.

Deep neural networks can be viewed as representation learning models: they encode the input and project it into a different subspace, and the resulting representations are then typically passed to a simple model, for instance a linear classifier, that is trained for the downstream task.
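
A common way to use (and evaluate) such representations is a linear probe: freeze a pretrained encoder and train only a linear classifier on its outputs. Below is a minimal PyTorch sketch of this setup; the torchvision ResNet-18 backbone, the class count, and the training step are illustrative assumptions, not details taken from this page.

```python
# Minimal linear-probe sketch (assumes torch and torchvision are installed):
# frozen pretrained encoder -> representations -> trainable linear classifier.
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone acts as the representation learner; drop its final layer.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_dim = backbone.fc.in_features      # 512 for ResNet-18
backbone.fc = nn.Identity()                # network now outputs representations
backbone.eval()                            # freeze the encoder
for p in backbone.parameters():
    p.requires_grad = False

num_classes = 10                           # hypothetical downstream task
linear_probe = nn.Linear(feature_dim, num_classes)
optimizer = torch.optim.Adam(linear_probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step of the linear classifier on frozen features."""
    with torch.no_grad():
        features = backbone(images)        # (batch, 512) representations
    logits = linear_probe(features)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```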

Representation learning can be divided into:

  • Supervised representation learning: representations are learned on task A using annotated data and then reused to solve task B
  • Unsupervised representation learning: representations are learned on a task in an unsupervised way (label-free data). They are then used for downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks (see the sketch after this list).
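
As a concrete illustration of the unsupervised route, the sketch below reuses a pretrained BERT encoder (via the Hugging Face transformers library) to produce fixed-size sentence representations that can feed any downstream classifier; the model name, mean-pooling choice, and example sentences are illustrative assumptions.

```python
# Minimal sketch (assumes the Hugging Face `transformers` package): reuse
# representations from a pretrained BERT encoder as features for a downstream task.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()                                   # use the pretrained representations as-is

def embed(sentences):
    """Return one fixed-size representation per sentence (mean-pooled token states)."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state      # (batch, seq_len, 768)
    mask = batch["attention_mask"].unsqueeze(-1)          # ignore padding tokens
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (batch, 768)

# These vectors can feed any downstream model (e.g. a logistic-regression classifier),
# reducing the amount of labelled data needed for the new task.
features = embed(["representation learning is useful", "labels are expensive"])
```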

More recently, self-supervised learning (SSL) has become one of the main drivers of unsupervised representation learning in fields like computer vision and NLP.
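
A representative SSL objective is the SimCLR-style contrastive (NT-Xent) loss, sketched below in PyTorch; the batch size, embedding dimension, and temperature are illustrative assumptions rather than details from this page.

```python
# Minimal sketch of a SimCLR-style contrastive (NT-Xent) objective, the kind of
# label-free loss behind much of self-supervised representation learning.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (batch, dim) embeddings of two augmented views of the same inputs."""
    batch = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2*batch, dim)
    sim = z @ z.t() / temperature                         # cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude self-pairs
    # The positive for sample i is its other augmented view.
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets)

# Usage: two random augmentations of the same batch are pulled together while all
# other samples are pushed apart; no labels are required.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent_loss(z1, z2)
```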


(Image credit: Visualizing and Understanding Convolutional Networks)

Papers

Showing 2001–2050 of 10580 papers

Title | Status | Hype
BayReL: Bayesian Relational Learning for Multi-omics Data Integration | Code | 1
DinoSR: Self-Distillation and Online Clustering for Self-supervised Speech Representation Learning | Code | 1
Self-Supervised Place Recognition by Refining Temporal and Featural Pseudo Labels from Panoramic Data | Code | 1
Domain-Invariant Representation Learning from EEG with Private Encoders | Code | 1
Semantically Guided Representation Learning For Action Anticipation | Code | 1
Down with the Hierarchy: The 'H' in HNSW Stands for "Hubs" | Code | 1
CrossWalk: Fairness-enhanced Node Representation Learning | Code | 1
Semantic-Aware Dual Contrastive Learning for Multi-label Image Classification | Code | 1
Probabilistic Contrastive Learning for Domain Adaptation | Code | 1
Semantic Entity Retrieval Toolkit | Code | 1
Semantic Relation-aware Difference Representation Learning for Change Captioning | Code | 1
CSformer: Bridging Convolution and Transformer for Compressive Sensing | Code | 1
Semi-supervised Implicit Scene Completion from Sparse LiDAR | Code | 1
Semi-Supervised Junction Tree Variational Autoencoder for Molecular Property Prediction | Code | 1
CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances | Code | 1
3D Human Action Representation Learning via Cross-View Consistency Pursuit | Code | 1
Disentanglement via Mechanism Sparsity Regularization: A New Principle for Nonlinear ICA | Code | 1
Sequence Level Contrastive Learning for Text Summarization | Code | 1
Beyond One Shot, Beyond One Perspective: Cross-View and Long-Horizon Distillation for Better LiDAR Representations | Code | 1
Set2Box: Similarity Preserving Representation Learning of Sets | Code | 1
Shadow Neural Radiance Fields for Multi-view Satellite Photogrammetry | Code | 1
Beyond Normal: On the Evaluation of Mutual Information Estimators | Code | 1
DisCo: Remedy Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning | Code | 1
Siamese DETR | Code | 1
Sign and Basis Invariant Networks for Spectral Graph Representation Learning | Code | 1
SIGN: Scalable Inception Graph Neural Networks | Code | 1
Similarity Contrastive Estimation for Image and Video Soft Contrastive Self-Supervised Learning | Code | 1
Curious Representation Learning for Embodied Intelligence | Code | 1
CURL: Contrastive Unsupervised Representation Learning for Reinforcement Learning | Code | 1
Be More with Less: Hypergraph Attention Networks for Inductive Text Classification | Code | 1
DOLG: Single-Stage Image Retrieval with Deep Orthogonal Fusion of Local and Global Features | Code | 1
Curriculum DeepSDF | Code | 1
Curriculum Disentangled Recommendation with Noisy Multi-feedback | Code | 1
Curriculum-Meta Learning for Order-Robust Continual Relation Extraction | Code | 1
Simple, Good, Fast: Self-Supervised World Models Free of Baggage | Code | 1
Simplicial Attention Networks | Code | 1
Benchmarking Omni-Vision Representation through the Lens of Visual Realms | Code | 1
Simplified Temporal Consistency Reinforcement Learning | Code | 1
ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning | Code | 1
Discover and Align Taxonomic Context Priors for Open-world Semi-Supervised Learning | Code | 1
CyCLIP: Cyclic Contrastive Language-Image Pretraining | Code | 1
Large-Scale Chemical Language Representations Capture Molecular Structure and Properties | Code | 1
Do learned representations respect causal relationships? | Code | 1
Single Domain Generalization for LiDAR Semantic Segmentation | Code | 1
Single Image 3D Shape Retrieval via Cross-Modal Instance and Category Contrastive Learning | Code | 1
Domain Adaptation with Invariant Representation Learning: What Transformations to Learn? | Code | 1
Does Zero-Shot Reinforcement Learning Exist? | Code | 1
An Auto-Encoder Strategy for Adaptive Image Segmentation | Code | 1
Do Generated Data Always Help Contrastive Learning? | Code | 1
Page 41 of 212

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SciNCL | Avg. | 81.8 | – | Unverified
2 | SPECTER | Avg. | 80 | – | Unverified
3 | Citeomatic | Avg. | 76 | – | Unverified
4 | Sci-DeCLUTR | Avg. | 66.6 | – | Unverified
5 | SciBERT | Avg. | 59.6 | – | Unverified
6 | BioBERT | Avg. | 58.8 | – | Unverified
7 | CiteBERT | Avg. | 58.8 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | top_model_weights_with_3d_2 | 1:1 Accuracy | 0.75 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Resnet 18 | Accuracy (%) | 97.05 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Morphological Network | Accuracy | 97.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Max Margin Contrastive | Silhouette Score | 0.56 | – | Unverified