
Representation Learning

Representation Learning is a process in machine learning where algorithms extract meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed for interpretability, reveal hidden features, or be used for transfer learning. They are valuable across many fundamental machine learning tasks like image classification and retrieval.

Deep neural networks can be viewed as representation learning models: they encode the input into intermediate representations, typically projecting it into a different subspace. These representations are then usually passed to a linear classifier that is trained for the task at hand, for instance image classification.
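
To make the pattern above concrete, here is a minimal sketch of a frozen encoder feeding a linear classifier (a "linear probe"), using torchvision's pretrained ResNet-18; the number of classes, the optimizer settings, and the random tensors standing in for a real dataloader are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision import models

# Pretrained network with its classification head removed, so it maps
# images into a 512-dimensional representation space.
encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = nn.Identity()
encoder.eval()
for p in encoder.parameters():
    p.requires_grad = False  # freeze the representation

num_classes = 10  # assumption: a 10-class downstream task
probe = nn.Linear(512, num_classes)  # the linear classifier on top
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    # One optimization step on the probe; the encoder stays fixed.
    with torch.no_grad():
        features = encoder(images)  # (batch, 512) representations
    logits = probe(features)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Random tensors stand in for a real dataloader here:
loss = train_step(torch.randn(8, 3, 224, 224), torch.randint(0, num_classes, (8,)))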

Representation learning can be divided into:

  • Supervised representation learning: learning representations on task A using annotated data, then reusing them to solve task B.
  • Unsupervised representation learning: learning representations on a task using unlabeled data. These representations are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks (see the sketch after this list).
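
As a minimal sketch of the unsupervised route, the snippet below reuses a pretrained BERT model's representations for a downstream task via the Hugging Face transformers library; the [CLS] pooling choice and the binary classification head are assumptions for illustration, not a prescribed recipe.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
bert.eval()  # representations come from unsupervised pretraining; no fine-tuning here

@torch.no_grad()
def embed(sentences):
    # Map raw text to fixed-size vectors via the [CLS] token embedding.
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    hidden = bert(**batch).last_hidden_state  # (batch, seq_len, 768)
    return hidden[:, 0]  # [CLS] pooling -> (batch, 768)

# Downstream task B: a hypothetical binary classifier trained on the frozen embeddings.
head = nn.Linear(768, 2)
logits = head(embed(["great movie", "terrible plot"]))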

More recently, self-supervised learning (SSL) has become one of the main drivers behind unsupervised representation learning in fields like computer vision and NLP.
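
One common self-supervised recipe is contrastive learning. Below is a minimal sketch of a SimCLR-style NT-Xent loss, in which two augmented views of each input form a positive pair; the temperature value and the random embeddings standing in for encoder outputs are assumptions.

import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    # z1, z2: (batch, dim) embeddings of two augmented views of the same
    # inputs; row i of z1 and row i of z2 form the positive pair.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, dim), unit norm
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    n = z.shape[0]
    # The positive for row i is row (i + B) mod 2B: the other view of input i.
    targets = (torch.arange(n) + n // 2) % n
    return F.cross_entropy(sim, targets)

# Random embeddings stand in for encoder outputs here:
loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))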


(Image credit: Visualizing and Understanding Convolutional Networks)

Papers

Showing 551–600 of 10,580 papers (page 12 of 212)

All papers listed below have code available (status: Code) and a hype score of 1.

  • Contrastive Cross-domain Recommendation in Matching
  • Contrastive Code Representation Learning
  • A Broad Study on the Transferability of Visual Representations with Contrastive Learning
  • Contrastive Continual Learning with Importance Sampling and Prototype-Instance Relation Distillation
  • Contrastive Difference Predictive Coding
  • Contrastive Learning of Generalized Game Representations
  • ContrastCAD: Contrastive Learning-based Representation Learning for Computer-Aided Design Models
  • Contrast, Attend and Diffuse to Decode High-Resolution Images from Brain Activities
  • Contrast Everything: A Hierarchical Contrastive Framework for Medical Time-Series
  • Continuous MDP Homomorphisms and Homomorphic Policy Gradient
  • Continual Self-supervised Learning: Towards Universal Multi-modal Medical Data Representation Learning
  • Continuous-Time and Multi-Level Graph Representation Learning for Origin-Destination Demand Prediction
  • Contrasting Contrastive Self-Supervised Representation Learning Pipelines
  • Continual Learning, Fast and Slow
  • Deep High-Resolution Representation Learning for Visual Recognition
  • Continual Learning for Image Segmentation with Dynamic Query
  • Contextual Representation Learning beyond Masked Language Modeling
  • Active Learning Through a Covering Lens
  • Contextual Vision Transformers for Robust Representation Learning
  • Continual Prototype Evolution: Learning Online from Non-Stationary Data Streams
  • Contrasting with Symile: Simple Model-Agnostic Representation Learning for Unlimited Modalities
  • Contrastive Learning of Global-Local Video Representations
  • CoReEcho: Continuous Representation Learning for 2D+time Echocardiography Analysis
  • Constrained Contrastive Distribution Learning for Unsupervised Anomaly Detection and Localisation in Medical Images
  • Temporal Context Aggregation for Video Retrieval with Contrastive Learning
  • CONQUER: Contextual Query-aware Ranking for Video Corpus Moment Retrieval
  • A robust estimator of mutual information for deep learning interpretability
  • Consistent Representation Learning for Continual Relation Extraction
  • Context is Gold to find the Gold Passage: Evaluating and Training Contextual Document Embeddings
  • Conditional Sound Generation Using Neural Discrete Time-Frequency Representation Learning
  • Concept Generalization in Visual Representation Learning
  • Conformer: Local Features Coupling Global Representations for Visual Recognition
  • A Fast Knowledge Distillation Framework for Visual Recognition
  • Concatenated Masked Autoencoders as Spatial-Temporal Learner
  • Congested Crowd Instance Localization with Dilated Convolutional Swin Transformer
  • Context Matters: Graph-based Self-supervised Representation Learning for Medical Images
  • A Fair Comparison of Graph Neural Networks for Graph Classification
  • COMPLETER: Incomplete Multi-view Clustering via Contrastive Prediction
  • Communicative Subgraph Representation Learning for Multi-Relational Inductive Drug-Gene Interaction Prediction
  • Actionness Inconsistency-guided Contrastive Learning for Weakly-supervised Temporal Action Localization
  • Complete Dictionary Learning via ℓp-Norm Maximization
  • Comprehensive Knowledge Distillation with Causal Intervention
  • COME: Adding Scene-Centric Forecasting Control to Occupancy World Model
  • Combating Representation Learning Disparity with Geometric Harmonization
  • COMEX: A Tool for Generating Customized Source Code Representations
  • CoMatch: Semi-supervised Learning with Contrastive Graph Regularization
  • A Rotated Hyperbolic Wrapped Normal Distribution for Hierarchical Representation Learning
  • Combating Label Noise in Deep Learning Using Abstention
  • Context Shift Reduction for Offline Meta-Reinforcement Learning
  • Action-Based Representation Learning for Autonomous Driving

Benchmark Results

#  Model        Metric  Claimed  Verified  Status
1  SciNCL       Avg.    81.8     n/a       Unverified
2  SPECTER      Avg.    80       n/a       Unverified
3  Citeomatic   Avg.    76       n/a       Unverified
4  Sci-DeCLUTR  Avg.    66.6     n/a       Unverified
5  SciBERT      Avg.    59.6     n/a       Unverified
6  CiteBERT     Avg.    58.8     n/a       Unverified
7  BioBERT      Avg.    58.8     n/a       Unverified

#  Model                        Metric        Claimed  Verified  Status
1  top_model_weights_with_3d_2  1:1 Accuracy  0.75     n/a       Unverified

#  Model      Metric        Claimed  Verified  Status
1  Resnet 18  Accuracy (%)  97.05    n/a       Unverified

#  Model                  Metric    Claimed  Verified  Status
1  Morphological Network  Accuracy  97.3     n/a       Unverified

#  Model                   Metric            Claimed  Verified  Status
1  Max Margin Contrastive  Silhouette Score  0.56     n/a       Unverified