
Representation Learning

Representation Learning is a process in machine learning where algorithms extract meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed for interpretability, reveal hidden features, or be used for transfer learning. They are valuable across many fundamental machine learning tasks like image classification and retrieval.

Deep neural networks can be viewed as representation learning models: they encode the input into an intermediate representation, typically by projecting it into a different subspace. This representation is then usually passed to a linear classifier that is trained for the downstream task, for instance image classification.
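The encoder-plus-linear-classifier setup above is often called a linear probe. A minimal sketch in numpy, where a fixed random projection stands in for a frozen pretrained encoder (the encoder, data, and hyperparameters here are all illustrative assumptions, not from any specific paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_frozen):
    """Frozen feature extractor: project raw data into a different subspace."""
    return np.tanh(x @ W_frozen)

# Toy 2-class data: 200 samples, 10 raw features.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

W_frozen = rng.normal(size=(10, 4))  # encoder weights stay fixed throughout
Z = encoder(X, W_frozen)             # the learned-style representation

# Linear probe: logistic regression trained only on the frozen features.
w, b = np.zeros(Z.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))   # predicted probabilities
    w -= 0.5 * (Z.T @ (p - y)) / len(y)      # gradient step on weights
    b -= 0.5 * np.mean(p - y)                # gradient step on bias

accuracy = np.mean(((Z @ w + b) > 0) == y)
```

Only the linear probe's parameters are updated; the quality of `accuracy` is then a rough measure of how much task-relevant information the frozen representation retains.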

Representation learning can be divided into:

  • Supervised representation learning: representations are learned on task A using annotated data and then reused to solve a different task B.
  • Unsupervised representation learning: representations are learned on label-free data. They are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks.
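The unsupervised route can be illustrated end to end with a deliberately simple stand-in: learn a representation from a large unlabeled pool (plain PCA here, in place of a model like BERT or GPT), then reuse it for a downstream task where only a handful of labels exist. All data and numbers below are synthetic assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# A large unlabeled pool drawn from two latent clusters.
centers = np.array([[3.0] * 5 + [0.0] * 5, [0.0] * 5 + [3.0] * 5])
unlabeled = centers[rng.integers(0, 2, 1000)] + rng.normal(size=(1000, 10))

# "Pretraining": fit a 2-D PCA basis on the unlabeled pool (no labels used).
mean = unlabeled.mean(axis=0)
_, _, Vt = np.linalg.svd(unlabeled - mean, full_matrices=False)
basis = Vt[:2].T  # the learned representation

def represent(x):
    return (x - mean) @ basis

# Downstream task: only 10 labeled examples are available.
labels = np.array([0] * 5 + [1] * 5)
labeled = centers[labels] + rng.normal(size=(10, 10))
proto = np.stack(
    [represent(labeled[labels == c]).mean(axis=0) for c in (0, 1)]
)

# Classify new points by the nearest class centroid in representation space.
test = centers[[0, 1]] + rng.normal(size=(2, 10))
pred = np.argmin(
    np.linalg.norm(represent(test)[:, None, :] - proto[None], axis=-1), axis=1
)
```

Because the representation was fit on 1000 unlabeled samples, the downstream classifier needs only 10 labeled ones — the same leverage, at a much larger scale, that pretrained language models provide.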

More recently, self-supervised learning (SSL) has become one of the main drivers behind unsupervised representation learning in fields like computer vision and NLP.
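A common family of SSL objectives is contrastive: two augmented views of the same sample should have similar embeddings, while other samples in the batch act as negatives. Below is a minimal sketch of an InfoNCE-style loss (the form used by methods such as SimCLR); the encoder is omitted, and `z1`/`z2` stand in for embeddings of the two views:

```python
import numpy as np

rng = np.random.default_rng(2)

def info_nce(z1, z2, temperature=0.1):
    """Cross-entropy over cosine similarities: row i of z1 should match
    row i of z2 against all other rows in the batch."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature           # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positive pairs on the diagonal

# Perfectly aligned views yield a much lower loss than unrelated pairs.
z = rng.normal(size=(8, 16))
aligned = info_nce(z, z)
random_pair = info_nce(z, rng.normal(size=(8, 16)))
```

Minimizing this loss pulls the two views of each sample together and pushes different samples apart, which is what shapes the representation without any labels.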


( Image credit: Visualizing and Understanding Convolutional Networks )

Papers

Showing 9901–9950 of 10580 papers

  • Fair Sufficient Representation Learning
  • False Negative Distillation and Contrastive Learning for Personalized Outfit Recommendation
  • FAM: Visual Explanations for the Feature Representations From Deep Convolutional Networks
  • PSLF: A PID Controller-incorporated Second-order Latent Factor Analysis Model for Recommender System
  • PTab: Using the Pre-trained Language Model for Modeling Tabular Data
  • FARM: Functional Group-Aware Representations for Small Molecules
  • FASG: Feature Aggregation Self-training GCN for Semi-supervised Node Classification
  • Fast Adaptive Federated Bilevel Optimization
  • Fast and Accurate Power Load Data Completion via Regularization-optimized Low-Rank Factorization
  • Fast and Robust Contextual Node Representation Learning over Dynamic Graphs
  • Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits
  • Fast and scalable learning of neuro-symbolic representations of biomedical knowledge
  • PT-Tuning: Bridging the Gap between Time Series Masked Reconstruction and Forecasting via Prompt Token Tuning
  • Bailing-TTS: Chinese Dialectal Speech Synthesis Towards Human-like Spontaneous Representation
  • Self-Supervised Learning for Medical Image Data with Anatomy-Oriented Imaging Planes
  • FastICARL: Fast Incremental Classifier and Representation Learning with Efficient Budget Allocation in Audio Sensing Applications
  • Purposer: Putting Human Motion Generation in Context
  • Fast Node Embeddings: Learning Ego-Centric Representations
  • A Crystal-Specific Pre-Training Framework for Crystal Material Property Prediction
  • FAVAE: Sequence Disentanglement Using Information Bottleneck Principle
  • Towards Better Few-Shot and Finetuning Performance with Forgetful Causal Language Models
  • FCOM: A Federated Collaborative Online Monitoring Framework via Representation Learning
  • fCOP: Focal Length Estimation from Category-level Object Priors
  • FDVTS's Solution for 2nd COV19D Competition on COVID-19 Detection and Severity Analysis
  • Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability
  • Feature Affinity based Pseudo Labeling for Semi-supervised Person Re-identification
  • Feature-Augmented Hypergraph Neural Networks
  • Feature-Based Lie Group Transformer for Real-World Applications
  • Feature-based Neural Language Model and Chinese Word Segmentation
  • Feature Decoupling in Self-supervised Representation Learning for Open Set Recognition
  • Feature Disentanglement of Robot Trajectories
  • Pushing the Limits of 3D Shape Generation at Scale
  • Feature Forgetting in Continual Representation Learning
  • Feature-guided Neural Model Training for Supervised Document Representation Learning
  • Feature Imitating Networks
  • Feature Incay for Representation Regularization
  • Feature Interactive Representation for Point Cloud Registration
  • Feature Matching Intervention: Leveraging Observational Data for Causal Representation Learning
  • Feature Normalization Prevents Collapse of Non-contrastive Learning Dynamics
  • Feature Projection for Improved Text Classification
  • Feature Propagation on Graph: A New Perspective to Graph Representation Learning
  • Feature Representation Learning for Click-through Rate Prediction: A Review and New Perspectives
  • Feature Representation Learning for NL2SQL Generation Based on Coupling and Decoupling
  • Feature Representation Learning with Adaptive Displacement Generation and Transformer Fusion for Micro-Expression Recognition
  • Feature Transformers: A Unified Representation Learning Framework for Lifelong Learning
  • FedAvg with Fine Tuning: Local Updates Lead to Representation Learning
  • FedCiR: Client-Invariant Representation Learning for Federated Non-IID Features
  • Self-supervised On-device Federated Learning from Unlabeled Streams
  • FedCRL: Personalized Federated Learning with Contrastive Shared Representations for Label Heterogeneity in Non-IID Data
  • FedDAR: Federated Domain-Aware Representation Learning
Page 199 of 212

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SciNCL | Avg. | 81.8 | - | Unverified
2 | SPECTER | Avg. | 80 | - | Unverified
3 | Citeomatic | Avg. | 76 | - | Unverified
4 | Sci-DeCLUTR | Avg. | 66.6 | - | Unverified
5 | SciBERT | Avg. | 59.6 | - | Unverified
6 | BioBERT | Avg. | 58.8 | - | Unverified
7 | CiteBERT | Avg. | 58.8 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | top_model_weights_with_3d_2 | 1:1 Accuracy | 0.75 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Resnet 18 | Accuracy (%) | 97.05 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Morphological Network | Accuracy | 97.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Max Margin Contrastive | Silhouette Score | 0.56 | - | Unverified