SOTAVerified

Self-Supervised Learning

Self-Supervised Learning was proposed as a way to exploit unlabeled data, building on the success of supervised learning. Producing a dataset with good labels is expensive, while unlabeled data is generated all the time; the motivation of Self-Supervised Learning is to make use of this large amount of unlabeled data. Its main idea is to generate labels from the unlabeled data itself, according to the structure or characteristics of the data, and then to train on these generated labels in a supervised manner. Self-Supervised Learning is widely used in representation learning to make a model learn the latent features of the data. The technique is often employed in computer vision, video processing, and robot control.

Source: Self-supervised Point Set Local Descriptors for Point Cloud Registration

Image source: LeCun
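The core idea above, deriving free labels from the structure of the data itself, can be sketched with a rotation-prediction pretext task: each unlabeled image is rotated by a random multiple of 90 degrees, and the rotation index becomes the label a classifier must predict. This is a minimal illustration in that spirit; the function name and toy data are ours, not from any paper listed below.

```python
import numpy as np

def make_rotation_pretext(images, rng=None):
    """Generate (input, pseudo-label) pairs from unlabeled images.

    Each image is rotated by k * 90 degrees for a random k in {0, 1, 2, 3};
    k is the free label. A model trained to predict k must learn visual
    features, which is the point of the pretext task.
    """
    rng = rng or np.random.default_rng(0)
    xs, ys = [], []
    for img in images:
        k = int(rng.integers(0, 4))   # pseudo-label generated from the data
        xs.append(np.rot90(img, k))   # the corresponding "supervised" input
        ys.append(k)
    return np.stack(xs), np.array(ys)

# Usage: eight random "unlabeled" 32x32 grayscale images.
unlabeled = np.random.default_rng(1).random((8, 32, 32))
x, y = make_rotation_pretext(unlabeled)
print(x.shape, y.shape)  # (8, 32, 32) (8,)
```

After this pretraining step, the label-free representation is typically evaluated by freezing the encoder and fitting a linear classifier on a small labeled set, which is how the benchmark accuracies below are usually obtained.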

Papers

Showing 401–450 of 5044 papers

Title | Status | Hype
Dive into Big Model Training | Code | 1
BYOL for Audio: Self-Supervised Learning for General-Purpose Audio Representation | Code | 1
Broaden Your Views for Self-Supervised Video Learning | Code | 1
Concept Generalization in Visual Representation Learning | Code | 1
A Survey on Self-Supervised Graph Foundation Models: Knowledge-Based Perspective | Code | 1
A Symbolic Character-Aware Model for Solving Geometry Problems | Code | 1
Distilling Visual Priors from Self-Supervised Learning | Code | 1
A Systematic Comparison of Phonetic Aware Techniques for Speech Enhancement | Code | 1
CONSAC: Robust Multi-Model Fitting by Conditional Sample Consensus | Code | 1
Consistency-based Self-supervised Learning for Temporal Anomaly Localization | Code | 1
Dive into Self-Supervised Learning for Medical Image Analysis: Data, Models and Tasks | Code | 1
DOBF: A Deobfuscation Pre-Training Objective for Programming Languages | Code | 1
Do Your Best and Get Enough Rest for Continual Learning | Code | 1
Container: Context Aggregation Networks | Code | 1
Context Matters: Graph-based Self-supervised Representation Learning for Medical Images | Code | 1
Contextually Affinitive Neighborhood Refinery for Deep Clustering | Code | 1
Bootstrap your own latent: A new approach to self-supervised Learning | Code | 1
ATST: Audio Representation Learning with Teacher-Student Transformer | Code | 1
Energy-Based Contrastive Learning of Visual Representations | Code | 1
Enhanced Masked Image Modeling to Avoid Model Collapse on Multi-modal MRI Datasets | Code | 1
Attention Distillation: self-supervised vision transformer students need more guidance | Code | 1
Adversarial Self-Supervised Contrastive Learning | Code | 1
Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning | Code | 1
Continually Learning Self-Supervised Representations with Projected Functional Regularization | Code | 1
Large-Scale Representation Learning on Graphs via Bootstrapping | Code | 1
Attentive Symmetric Autoencoder for Brain MRI Segmentation | Code | 1
Bootstrapping Autonomous Driving Radars with Self-Supervised Learning | Code | 1
Audio-Adaptive Activity Recognition Across Video Domains | Code | 1
Dissecting Self-Supervised Learning Methods for Surgical Computer Vision | Code | 1
Equivariant Contrastive Learning | Code | 1
Contrastive Hierarchical Clustering | Code | 1
Contrastive Graph Learning for Population-based fMRI Classification | Code | 1
Contrastive Learning with Cross-Modal Knowledge Mining for Multimodal Human Activity Recognition | Code | 1
Contrastive Self-Supervised Learning for Commonsense Reasoning | Code | 1
Audio-Visual Instance Discrimination with Cross-Modal Agreement | Code | 1
Contrastive Learning Is Spectral Clustering On Similarity Graph | Code | 1
A Review on Self-Supervised Learning for Time Series Anomaly Detection: Recent Advances and Open Challenges | Code | 1
Contrastive Learning of Musical Representations | Code | 1
AASAE: Augmentation-Augmented Stochastic Autoencoders | Code | 1
Contrastive Learning with Synthetic Positives | Code | 1
M3-Jepa: Multimodal Alignment via Multi-directional MoE based on the JEPA framework | Code | 1
Augmentation-Free Self-Supervised Learning on Graphs | Code | 1
Contrastive Multi-View Representation Learning on Graphs | Code | 1
Contrastive Neural Processes for Self-Supervised Learning | Code | 1
ATD: Augmenting CP Tensor Decomposition by Self Supervision | Code | 1
Augmenting Reinforcement Learning with Transformer-based Scene Representation Learning for Decision-making of Autonomous Driving | Code | 1
scSSL-Bench: Benchmarking Self-Supervised Learning for Single-Cell Data | Code | 1
Contrastive Self-supervised Sequential Recommendation with Robust Augmentation | Code | 1
AMMUS : A Survey of Transformer-based Pretrained Models in Natural Language Processing | Code | 1
3D Object Detection with a Self-supervised Lidar Scene Flow Backbone | Code | 1
Page 9 of 101

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Pretraining: None | Images & Text | 57.5 | – | Unverified
2 | Pretraining: ShED | Images & Text | 54.3 | – | Unverified
3 | Pretraining: e-Mix | Images & Text | 48.9 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | Accuracy | 91.7 | – | Unverified
2 | ResNet18 | Accuracy | 91.02 | – | Unverified
3 | MV-MR | Accuracy | 89.67 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | average top-1 classification accuracy | 93.89 | – | Unverified
2 | ResNet18 | average top-1 classification accuracy | 92.58 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | average top-1 classification accuracy | 72.51 | – | Unverified
2 | ResNet18 | average top-1 classification accuracy | 69.31 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CorInfomax (ResNet50) | Top-1 Accuracy | 82.64 | – | Unverified
2 | CorInfomax (ResNet18) | Top-1 Accuracy | 80.48 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | average top-1 classification accuracy | 51.84 | – | Unverified
2 | ResNet18 | average top-1 classification accuracy | 51.67 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CorInfomax (ResNet18) | Top-1 Accuracy | 93.18 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CorInfomax (ResNet18) | Top-1 Accuracy | 71.61 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Hybrid BYOL-S/CvT | Accuracy | 67.2 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CorInfomax (ResNet50) | Top-1 Accuracy | 54.86 | – | Unverified