SOTAVerified

Representation Learning

Representation learning is the process by which machine learning algorithms extract meaningful patterns from raw data and turn them into representations that are easier to understand and process. These representations can be designed for interpretability, can reveal hidden features, or can serve as the basis for transfer learning. They are valuable across many fundamental machine learning tasks such as image classification and retrieval.

Deep neural networks can themselves be viewed as representation learning models: they encode raw inputs into representations living in a different subspace. These representations are then typically fed to a linear classifier, for example to perform image classification.
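The encoder-plus-linear-classifier pattern above can be sketched in miniature. Everything here is illustrative: a fixed random projection stands in for a pretrained deep encoder, and a hand-rolled logistic regression plays the role of the linear classifier ("linear probe") trained on the frozen representations.

```python
import math
import random

random.seed(0)

# Hypothetical frozen "encoder": a fixed random linear projection standing in
# for a pretrained deep network. It maps raw 2-D inputs into a 3-D
# representation space.
W_enc = [[random.gauss(0, 1) for _ in range(2)] for _ in range(3)]

def encode(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W_enc]

def sigmoid(t):
    t = max(-30.0, min(30.0, t))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-t))

# Toy labelled data: class 0 clustered near (-1, -1), class 1 near (+1, +1).
data = [([random.gauss(-1, 0.3), random.gauss(-1, 0.3)], 0) for _ in range(50)]
data += [([random.gauss(1, 0.3), random.gauss(1, 0.3)], 1) for _ in range(50)]

# Linear probe: logistic regression trained on the frozen representations.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1
for _ in range(200):
    for x, y in data:
        z = encode(x)
        g = sigmoid(sum(wi * zi for wi, zi in zip(w, z)) + b) - y
        w = [wi - lr * g * zi for wi, zi in zip(w, z)]
        b -= lr * g

accuracy = sum(
    (sigmoid(sum(wi * zi for wi, zi in zip(w, encode(x))) + b) > 0.5) == (y == 1)
    for x, y in data
) / len(data)
print(f"linear-probe accuracy: {accuracy:.2f}")
```

Only the probe's weights are updated; the encoder stays fixed, which is exactly how representation quality is commonly evaluated.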

Representation learning can be divided into:

  • Supervised representation learning: representations are learned on task A using annotated data, then reused to solve task B.
  • Unsupervised representation learning: representations are learned on a task using label-free data. They are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models such as GPT and BERT leverage unsupervised representation learning to tackle language tasks.
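The unsupervised side of this split can be illustrated with a toy example: a one-unit linear autoencoder that learns a 1-D representation of 2-D points without ever seeing a label. The data and hyperparameters below are made up for illustration.

```python
import random

random.seed(1)

# Unsupervised representation learning in miniature: a one-unit linear
# autoencoder trained to reconstruct 2-D points lying near the line y = x.
# No labels are used; the learned code is a 1-D representation of each point.
data = [(t + random.gauss(0, 0.05), t + random.gauss(0, 0.05))
        for t in [random.uniform(-1, 1) for _ in range(200)]]

w1, w2 = 0.5, -0.3   # encoder weights: code = w1*x + w2*y
v1, v2 = 0.2, 0.8    # decoder weights: reconstruction = (v1*code, v2*code)
lr = 0.05
for _ in range(100):
    for x, y in data:
        c = w1 * x + w2 * y
        ex, ey = v1 * c - x, v2 * c - y  # reconstruction error
        # stochastic gradient descent on the squared reconstruction error
        gc = ex * v1 + ey * v2
        v1 -= lr * ex * c
        v2 -= lr * ey * c
        w1 -= lr * gc * x
        w2 -= lr * gc * y

mse = sum((v1 * (w1 * x + w2 * y) - x) ** 2 +
          (v2 * (w1 * x + w2 * y) - y) ** 2 for x, y in data) / len(data)
print(f"reconstruction MSE: {mse:.4f}")
```

Because reconstruction is the training signal, the code ends up capturing the principal direction of the data, much as larger unsupervised models capture reusable structure for downstream tasks.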

More recently, self-supervised learning (SSL) has become one of the main drivers of unsupervised representation learning in fields like computer vision and NLP.
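Many SSL methods rest on a contrastive objective: two augmented views of the same input should embed close together, while views of other inputs are pushed apart. Below is a minimal sketch of the InfoNCE loss used in SimCLR-style training; the embedding vectors are hand-written toy values, not outputs of a real encoder.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Cross-entropy of picking the positive among positive + negatives,
    using temperature-scaled cosine similarity as the logit."""
    a = normalize(anchor)
    candidates = [normalize(positive)] + [normalize(n) for n in negatives]
    logits = [dot(a, c) / temperature for c in candidates]
    m = max(logits)  # stabilise the log-sum-exp
    return -(logits[0] - m - math.log(sum(math.exp(l - m) for l in logits)))

# Two augmented "views" of the same image should embed close together...
anchor, positive = [1.0, 0.1, 0.0], [0.9, 0.2, 0.1]
# ...while views of other images act as negatives.
negatives = [[-1.0, 0.5, 0.3], [0.0, -1.0, 0.8]]

loss_aligned = info_nce(anchor, positive, negatives)
loss_mismatched = info_nce(anchor, negatives[0], [positive, negatives[1]])
print(loss_aligned, loss_mismatched)  # the aligned pair yields the lower loss
```

Minimising this loss over many anchors is what shapes the representation space, without any human annotation.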


(Image credit: Visualizing and Understanding Convolutional Networks)

Papers

Showing 1301–1350 of 10580 papers

| Title | Status | Hype |
| --- | --- | --- |
| Benchmarking Omni-Vision Representation through the Lens of Visual Realms | Code | 1 |
| Unified 2D and 3D Pre-Training of Molecular Representations | Code | 1 |
| Deep Image Clustering with Contrastive Learning and Multi-scale Graph Convolutional Networks | Code | 1 |
| Proposal-Free Temporal Action Detection via Global Segmentation Mask Learning | Code | 1 |
| Masked Autoencoders that Listen | Code | 1 |
| The DLCC Node Classification Benchmark for Analyzing Knowledge Graph Embeddings | Code | 1 |
| Temporal Disentanglement of Representations for Improved Generalisation in Reinforcement Learning | Code | 1 |
| TASKOGRAPHY: Evaluating robot task planning over large 3D scene graphs | Code | 1 |
| A clinically motivated self-supervised approach for content-based image retrieval of CT liver images | Code | 1 |
| A Proposal of Multi-Layer Perceptron with Graph Gating Unit for Graph Representation Learning and its Application to Surrogate Model for FEM | Code | 1 |
| Sudowoodo: Contrastive Self-supervised Learning for Multi-purpose Data Integration and Preparation | Code | 1 |
| Graph-based Molecular Representation Learning | Code | 1 |
| GFNet: Geometric Flow Network for 3D Point Cloud Semantic Segmentation | Code | 1 |
| Weakly Supervised Grounding for VQA in Vision-Language Transformers | Code | 1 |
| Vision-based Uneven BEV Representation Learning with Polar Rasterization and Surface Estimation | Code | 1 |
| Masked Autoencoders in 3D Point Cloud Representation Learning | Code | 1 |
| Invariant and Transportable Representations for Anti-Causal Domain Shifts | Code | 1 |
| Boundary-Guided Camouflaged Object Detection | Code | 1 |
| PolarFormer: Multi-camera 3D Object Detection with Polar Transformer | Code | 1 |
| Denoised MDPs: Learning World Models Better Than the World Itself | Code | 1 |
| Continuous-Time and Multi-Level Graph Representation Learning for Origin-Destination Demand Prediction | Code | 1 |
| Self-Supervised Learning for Multimedia Recommendation | Code | 1 |
| Laplacian Autoencoders for Learning Stochastic Representations | Code | 1 |
| BATFormer: Towards Boundary-Aware Lightweight Transformer for Efficient Medical Image Segmentation | Code | 1 |
| SSL-Lanes: Self-Supervised Learning for Motion Forecasting in Autonomous Driving | Code | 1 |
| Measuring and Improving the Use of Graph Information in Graph Neural Networks | Code | 1 |
| A Representation Learning Framework for Property Graphs | Code | 1 |
| Vision Transformer for Contrastive Clustering | Code | 1 |
| Utilizing Expert Features for Contrastive Learning of Time-Series Representations | Code | 1 |
| Robust Task Representations for Offline Meta-Reinforcement Learning via Contrastive Learning | Code | 1 |
| Variational Distillation for Multi-View Learning | Code | 1 |
| SSM-DTA: Breaking the Barriers of Data Scarcity in Drug-Target Affinity Prediction | Code | 1 |
| MET: Masked Encoding for Tabular Data | Code | 1 |
| BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning | Code | 1 |
| Boosting Graph Structure Learning with Dummy Nodes | Code | 1 |
| DenseMTL: Cross-task Attention Mechanism for Dense Multi-task Learning | Code | 1 |
| Learning Fair Representation via Distributional Contrastive Disentanglement | Code | 1 |
| MixGen: A New Multi-Modal Data Augmentation | Code | 1 |
| NCAGC: A Neighborhood Contrast Framework for Attributed Graph Clustering | Code | 1 |
| Time Interval-enhanced Graph Neural Network for Shared-account Cross-domain Sequential Recommendation | Code | 1 |
| Patch-level Representation Learning for Self-supervised Vision Transformers | Code | 1 |
| Masked Frequency Modeling for Self-Supervised Visual Pre-Training | Code | 1 |
| Taxonomy of Benchmarks in Graph Representation Learning | Code | 1 |
| A Simple Data Mixing Prior for Improving Self-Supervised Learning | Code | 1 |
| GraphMLP: A Graph MLP-Like Architecture for 3D Human Pose Estimation | Code | 1 |
| MetaTPTrans: A Meta Learning Approach for Multilingual Code Representation Learning | Code | 1 |
| Causal Representation Learning for Instantaneous and Temporal Effects in Interactive Systems | Code | 1 |
| Soft-mask: Adaptive Substructure Extractions for Graph Neural Networks | Code | 1 |
| Balanced Product of Calibrated Experts for Long-Tailed Recognition | Code | 1 |
| COSTA: Covariance-Preserving Feature Augmentation for Graph Contrastive Learning | Code | 1 |
Page 27 of 212

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | SciNCL | Avg. | 81.8 | | Unverified |
| 2 | SPECTER | Avg. | 80 | | Unverified |
| 3 | Citeomatic | Avg. | 76 | | Unverified |
| 4 | Sci-DeCLUTR | Avg. | 66.6 | | Unverified |
| 5 | SciBERT | Avg. | 59.6 | | Unverified |
| 6 | BioBERT | Avg. | 58.8 | | Unverified |
| 7 | CiteBERT | Avg. | 58.8 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | top_model_weights_with_3d_2 | 1:1 Accuracy | 0.75 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Resnet 18 | Accuracy (%) | 97.05 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Morphological Network | Accuracy | 97.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Max Margin Contrastive | Silhouette Score | 0.56 | | Unverified |