
Representation Learning

Representation Learning is a process in machine learning where algorithms extract meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed for interpretability, reveal hidden features, or be used for transfer learning. They are valuable across many fundamental machine learning tasks like image classification and retrieval.

Deep neural networks can be considered representation learning models: they encode the input and project it into a different subspace. These representations are then usually passed to a simple predictor, for instance a linear classifier, to solve the task at hand, as in the sketch below.
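As a rough illustration (not code from this page), the frozen-encoder-plus-linear-classifier setup can be sketched in PyTorch. The encoder, feature dimension, and input shapes below are placeholder assumptions.

```python
# Sketch: probe a frozen encoder's representations with a trainable linear classifier.
# The encoder here is a toy stand-in; any pretrained backbone could take its place.
import torch
import torch.nn as nn

class LinearProbe(nn.Module):
    def __init__(self, encoder: nn.Module, feature_dim: int, num_classes: int):
        super().__init__()
        self.encoder = encoder                    # pretrained representation model
        for p in self.encoder.parameters():
            p.requires_grad = False               # keep the representations fixed
        self.head = nn.Linear(feature_dim, num_classes)  # trainable linear classifier

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            z = self.encoder(x)                   # representation in a learned subspace
        return self.head(z)                       # class logits from the linear head

# Toy usage (assumed shapes: 3x32x32 images -> 128-d features -> 10 classes)
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
probe = LinearProbe(encoder, feature_dim=128, num_classes=10)
logits = probe(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```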

Representation learning can be divided into:

  • Supervised representation learning: representations are learned on a task A using annotated data and then reused to solve a different task B (see the transfer sketch after this list)
  • Unsupervised representation learning: representations are learned on a task in an unsupervised way, i.e. from label-free data. They are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks.
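
To make the supervised pretrain-then-transfer setting concrete, here is a hedged sketch: an encoder is trained on an annotated task A, then frozen and reused for task B with a new head. All module names, sizes, and data below are made up for illustration.

```python
# Sketch: supervised representation learning on task A, reused for task B.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 32))  # shared representation
head_a = nn.Linear(32, 5)    # task A head: 5 classes, annotated data available
head_b = nn.Linear(32, 2)    # task B head: downstream task reusing the representation

# --- Task A: train encoder + head_a on labeled data (one illustrative step) ---
x_a, y_a = torch.randn(16, 20), torch.randint(0, 5, (16,))
opt_a = torch.optim.Adam(list(encoder.parameters()) + list(head_a.parameters()), lr=1e-3)
loss_a = nn.functional.cross_entropy(head_a(encoder(x_a)), y_a)
opt_a.zero_grad(); loss_a.backward(); opt_a.step()

# --- Task B: freeze (or optionally fine-tune) the encoder, train only the new head ---
for p in encoder.parameters():
    p.requires_grad = False
x_b, y_b = torch.randn(16, 20), torch.randint(0, 2, (16,))
opt_b = torch.optim.Adam(head_b.parameters(), lr=1e-3)
loss_b = nn.functional.cross_entropy(head_b(encoder(x_b)), y_b)
opt_b.zero_grad(); loss_b.backward(); opt_b.step()
```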

More recently, self-supervised learning (SSL) has become one of the main drivers behind unsupervised representation learning in fields like computer vision and NLP.
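As one hedged example of such an SSL objective, the following sketches a SimCLR-style contrastive (NT-Xent) loss computed over embeddings of two augmented views of the same batch. The embeddings, batch size, and temperature are illustrative assumptions, not taken from any paper listed below.

```python
# Sketch: SimCLR-style NT-Xent contrastive loss for self-supervised representation learning.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: embeddings of two augmented views of the same batch, shape (N, D)."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit-norm embeddings
    sim = torch.matmul(z, z.t()) / temperature           # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude self-similarity
    # The positive for sample i is its other augmented view: i <-> i + N (mod 2N).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage with dummy embeddings standing in for encoder outputs of two views
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent(z1, z2)
print(loss.item())
```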


(Image credit: Visualizing and Understanding Convolutional Networks)

Papers

Showing 1601–1625 of 10580 papers

Title | Status | Hype
A Fast Knowledge Distillation Framework for Visual Recognition | Code | 1
Iterative Contrast-Classify For Semi-supervised Temporal Action Segmentation | Code | 1
SAR Image Despeckling Using Continuous Attention Module | Code | 1
MutualFormer: Multi-Modality Representation Learning via Cross-Diffusion Attention | Code | 1
Contrastive Cross-domain Recommendation in Matching | Code | 1
BEVT: BERT Pretraining of Video Transformers | Code | 1
TokenLearner: Adaptive Space-Time Tokenization for Videos | Code | 1
Representation Learning on Spatial Networks | Code | 1
Molecular Contrastive Learning with Chemical Element Knowledge Graph | Code | 1
Graph Neural Networks with Adaptive Residual | Code | 1
Comprehensive Knowledge Distillation with Causal Intervention | Code | 1
Distilling Meta Knowledge on Heterogeneous Graph for Illicit Drug Trafficker Detection on Social Media | Code | 1
Revitalizing CNN Attention via Transformers in Self-Supervised Visual Representation Learning | Code | 1
TriBERT: Human-centric Audio-visual Representation Learning | Code | 1
Pooling by Sliced-Wasserstein Embedding | Code | 1
Curriculum Disentangled Recommendation with Noisy Multi-feedback | Code | 1
Domain Adaptation with Invariant Representation Learning: What Transformations to Learn? | Code | 1
Diffusion Autoencoders: Toward a Meaningful and Decodable Representation | Code | 1
On the Integration of Self-Attention and Convolution | Code | 1
Semi-supervised Implicit Scene Completion from Sparse LiDAR | Code | 1
Similarity Contrastive Estimation for Self-Supervised Soft Contrastive Learning | Code | 1
HGATE: Heterogeneous Graph Attention Auto-Encoders | Code | 1
Latent Space Smoothing for Individually Fair Representations | Code | 1
DeepGate: Learning Neural Representations of Logic Gates | Code | 1
Semantic-Aware Generation for Self-Supervised Visual Representation Learning | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | SciNCL | Avg. | 81.8 | - | Unverified
2 | SPECTER | Avg. | 80 | - | Unverified
3 | Citeomatic | Avg. | 76 | - | Unverified
4 | Sci-DeCLUTR | Avg. | 66.6 | - | Unverified
5 | SciBERT | Avg. | 59.6 | - | Unverified
6 | BioBERT | Avg. | 58.8 | - | Unverified
7 | CiteBERT | Avg. | 58.8 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | top_model_weights_with_3d_2 | 1:1 Accuracy | 0.75 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Resnet 18 | Accuracy (%) | 97.05 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Morphological Network | Accuracy | 97.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Max Margin Contrastive | Silhouette Score | 0.56 | - | Unverified