SOTAVerified

Self-Supervised Learning

Self-Supervised Learning was proposed to exploit unlabeled data, building on the success of supervised learning. Producing a dataset with good labels is expensive, while unlabeled data is generated all the time; the motivation of Self-Supervised Learning is to make use of this large amount of unlabeled data. The main idea is to generate labels from the unlabeled data itself, according to the structure or characteristics of the data, and then train on the resulting data in a supervised manner. Self-Supervised Learning is widely used in representation learning to make a model learn the latent features of the data. The technique is often employed in computer vision, video processing, and robot control.

Source: Self-supervised Point Set Local Descriptors for Point Cloud Registration

Image source: LeCun
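
The label-generation idea described above can be sketched with rotation prediction, one classic pretext task for images: each unlabeled image is rotated by a multiple of 90 degrees, and the rotation index becomes the label, so the pseudo-labels come from the data itself rather than human annotation. This is a minimal illustrative sketch; the function name and toy NumPy "images" are assumptions, not taken from any specific paper on this page.

```python
import numpy as np

def make_rotation_task(images):
    """Turn unlabeled images into a labeled dataset for a pretext task.

    Each image is rotated by k quarter-turns (k = 0..3); the rotation
    index k serves as the pseudo-label. A classifier trained to predict
    k learns useful visual features without any human annotation.
    (Illustrative sketch, not a specific paper's implementation.)
    """
    xs, ys = [], []
    for img in images:
        for k in range(4):
            xs.append(np.rot90(img, k))  # pseudo-input: rotated image
            ys.append(k)                 # pseudo-label derived from the data itself
    return np.stack(xs), np.array(ys)

# Tiny demo with three unlabeled 8x8 "images".
unlabeled = [np.random.rand(8, 8) for _ in range(3)]
X, y = make_rotation_task(unlabeled)
print(X.shape, y[:4])  # (12, 8, 8) [0 1 2 3]
```

A model trained on (X, y) in the ordinary supervised way then supplies its intermediate representations for the real downstream task, which is the pattern most of the papers listed below follow.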

Papers

Showing 4551–4600 of 5044 papers

Title | Status | Hype
Learning Visual Representations for Transfer Learning by Suppressing Texture | Code | 1
Patch2Self: Denoising Diffusion MRI with Self-Supervised Learning | Code | 1
NICT Kyoto Submission for the WMT’20 Quality Estimation Task: Intermediate Training for Domain and Task Adaptation | — | 0
Response Selection for Multi-Party Conversations with Dynamic Topic Tracking | — | 0
Understanding Pre-trained BERT for Aspect-based Sentiment Analysis | Code | 1
Scene Flow from Point Clouds with or without Learning | — | 0
Self-supervised Representation Learning for Evolutionary Neural Architecture Search | Code | 0
A Survey on Contrastive Self-supervised Learning | — | 0
Joint Masked CPC and CTC Training for ASR | Code | 1
PAL : Pretext-based Active Learning | — | 0
Combining Self-Training and Self-Supervised Learning for Unsupervised Disfluency Detection | Code | 1
Pretext-Contrastive Learning: Toward Good Practices in Self-supervised Video Representation Leaning | Code | 1
Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion | Code | 1
Refactoring Policy for Compositional Generalizability using Self-Supervised Object Proposals | — | 0
Multi-object tracking with self-supervised associating network | — | 0
XLVIN: eXecuted Latent Value Iteration Nets | — | 0
Pre-training Text-to-Text Transformers for Concept-centric Common Sense | Code | 1
BARThez: a Skilled Pretrained French Sequence-to-Sequence Model | Code | 1
LoopReg: Self-supervised Learning of Implicit Surface Correspondences, Pose and Shape for 3D Human Mesh Registration | — | 0
Graph Contrastive Learning with Augmentations | Code | 1
Self-Supervised Shadow Removal | — | 0
Contrastive Learning with Adversarial Examples | — | 0
Contrastive Self-Supervised Learning for Wireless Power Control | Code | 0
Self-Alignment Pretraining for Biomedical Entity Representations | Code | 1
Self-supervised Human Activity Recognition by Learning to Predict Cross-Dimensional Motion | — | 0
Self-supervised Graph Learning for Recommendation | Code | 1
Self-Supervised Learning of Part Mobility from Point Cloud Sequence | Code | 0
BYOL works even without batch statistics | Code | 2
Understanding YouTube Communities via Subscription-based Channel Embeddings | Code | 1
Self-supervised Geometric Features Discovery via Interpretable Attention for Vehicle Re-Identification and Beyond | Code | 0
CLAR: Contrastive Learning of Auditory Representations | — | 0
SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks | — | 0
From Local Structures to Size Generalization in Graph Neural Networks | — | 0
On the surprising similarities between supervised and self-supervised models | — | 0
Unsupervised Natural Language Inference via Decoupled Multimodal Contrastive Learning | Code | 0
For self-supervised learning, Rationality implies generalization, provably | Code | 0
Representation Learning via Invariant Causal Mechanisms | Code | 1
Self-Supervised Ranking for Representation Learning | — | 0
Are all negatives created equal in contrastive instance discrimination? | — | 0
Measuring Visual Generalization in Continuous Control from Pixels | Code | 1
SAR: Scale-Aware Restoration Learning for 3D Tumor Segmentation | — | 0
Audio-Visual Self-Supervised Terrain Type Discovery for Mobile Platforms | — | 0
MixCo: Mix-up Contrastive Learning for Visual Representation | Code | 1
Self-Supervised Multi-View Synchronization Learning for 3D Pose Estimation | — | 0
MS^2L: Multi-Task Self-Supervised Learning for Skeleton Based Action Recognition | Code | 1
Rethinking supervised learning: insights from biological learning and from calling it by its name | — | 0
Learning 3D Face Reconstruction with a Pose Guidance Network | — | 0
A Human Ear Reconstruction Autoencoder | — | 0
Pathological Visual Question Answering | — | 0
Page 92 of 101

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Pretraining: None | Images & Text | 57.5 | — | Unverified
2 | Pretraining: ShED | Images & Text | 54.3 | — | Unverified
3 | Pretraining: e-Mix | Images & Text | 48.9 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | Accuracy | 91.7 | — | Unverified
2 | ResNet18 | Accuracy | 91.02 | — | Unverified
3 | MV-MR | Accuracy | 89.67 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | average top-1 classification accuracy | 93.89 | — | Unverified
2 | ResNet18 | average top-1 classification accuracy | 92.58 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | average top-1 classification accuracy | 72.51 | — | Unverified
2 | ResNet18 | average top-1 classification accuracy | 69.31 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CorInfomax (ResNet50) | Top-1 Accuracy | 82.64 | — | Unverified
2 | CorInfomax (ResNet18) | Top-1 Accuracy | 80.48 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | ResNet50 | average top-1 classification accuracy | 51.84 | — | Unverified
2 | ResNet18 | average top-1 classification accuracy | 51.67 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CorInfomax (ResNet18) | Top-1 Accuracy | 93.18 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CorInfomax (ResNet18) | Top-1 Accuracy | 71.61 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Hybrid BYOL-S/CvT | Accuracy | 67.2 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CorInfomax (ResNet50) | Top-1 Accuracy | 54.86 | — | Unverified