SOTAVerified

Self-Supervised Learning

Self-Supervised Learning extends the success of supervised learning to unlabeled data. Producing a dataset with good labels is expensive, while unlabeled data is generated all the time, so the motivation is to exploit this large pool of unlabeled data. The main idea is to generate labels from the unlabeled data itself, according to its structure or characteristics, and then train on the resulting pseudo-labeled data in a supervised manner. Self-Supervised Learning is widely used in representation learning to help a model learn the latent features of the data, and is often employed in computer vision, video processing, and robot control.
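The label-generation idea above can be sketched in a few lines of Python. This is a minimal, hypothetical example of one common pretext task, rotation prediction: the "label" for each unlabeled image is simply the rotation we applied to it, so a standard supervised classifier can then be trained on (rotated image, rotation index) pairs. The `make_rotation_dataset` helper is an illustrative name, not part of any specific library.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_rotation_dataset(images):
    """Rotate each unlabeled image by a random multiple of 90 degrees.

    The rotation index (0-3) becomes a 'free' supervised label derived
    from the data itself -- no human annotation required.
    """
    ks = rng.integers(0, 4, size=len(images))
    rotated = np.stack([np.rot90(img, k) for img, k in zip(images, ks)])
    return rotated, ks

# Stand-in for a pool of unlabeled data: 8 random 4x4 "images".
unlabeled = rng.normal(size=(8, 4, 4))
x, y = make_rotation_dataset(unlabeled)

print(x.shape, y.shape)  # (8, 4, 4) (8,)
```

A model trained to predict `y` from `x` never sees a human-provided label, yet learning to solve the pretext task forces it to extract features (edges, object orientation) that transfer to downstream tasks.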

Source: Self-supervised Point Set Local Descriptors for Point Cloud Registration

Image source: LeCun

Papers

Showing 1151–1200 of 5044 papers

| Title | Status | Hype |
|---|---|---|
| Self-Supervised Learning for Fine-Grained Visual Categorization | Code | 1 |
| Masked Contrastive Learning for Anomaly Detection | Code | 1 |
| Mean Shift for Self-Supervised Learning | Code | 1 |
| Window-Level is a Strong Denoising Surrogate | Code | 1 |
| Waste detection in Pomerania: non-profit project for detecting waste in environment | Code | 1 |
| Electrocardio Panorama: Synthesizing New ECG Views with Self-supervision | Code | 1 |
| Semantic Distribution-aware Contrastive Adaptation for Semantic Segmentation | Code | 1 |
| VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning | Code | 1 |
| Self-Supervised Learning with Swin Transformers | Code | 1 |
| Salient Objects in Clutter | Code | 1 |
| Self-Supervised Multi-Frame Monocular Scene Flow | Code | 1 |
| Self-Supervised Learning from Automatically Separated Sound Scenes | Code | 1 |
| SUPERB: Speech processing Universal PERformance Benchmark | Code | 1 |
| On Feature Decorrelation in Self-Supervised Learning | Code | 1 |
| Emerging Properties in Self-Supervised Vision Transformers | Code | 1 |
| A Note on Connecting Barlow Twins with Negative-Sample-Free Contrastive Learning | Code | 1 |
| Self-supervised Spatial Reasoning on Multi-View Line Drawings | Code | 1 |
| Towards Good Practices for Efficiently Annotating Large-Scale Image Classification Datasets | Code | 1 |
| Multimodal Clustering Networks for Self-supervised Learning from Unlabeled Videos | Code | 1 |
| LeBenchmark: A Reproducible Framework for Assessing Self-Supervised Representation Learning from Speech | Code | 1 |
| VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text | Code | 1 |
| Distill on the Go: Online knowledge distillation in self-supervised learning | Code | 1 |
| Generative Transformer for Accurate and Reliable Salient Object Detection | Code | 1 |
| DisCo: Remedy Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning | Code | 1 |
| When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset | Code | 1 |
| Solving Inefficiency of Self-supervised Representation Learning | Code | 1 |
| A Surface Geometry Model for LiDAR Depth Completion | Code | 1 |
| Contrastive Learning with Stronger Augmentations | Code | 1 |
| K-PLUG: Knowledge-injected Pre-trained Language Model for Natural Language Understanding and Generation in E-Commerce | Code | 1 |
| Self-Supervised Learning of Remote Sensing Scene Representations Using Contrastive Multiview Coding | Code | 1 |
| Towards Fine-grained Visual Representations by Combining Contrastive Learning with Image Reconstruction and Attention-weighted Pooling | Code | 1 |
| Speech Denoising Without Clean Training Data: A Noise2Noise Approach | Code | 1 |
| CutPaste: Self-Supervised Learning for Anomaly Detection and Localization | Code | 1 |
| CoCoNets: Continuous Contrastive 3D Scene Representations | Code | 1 |
| SiT: Self-supervised vIsion Transformer | Code | 1 |
| Self-supervised Learning of Depth Inference for Multi-view Stereo | Code | 1 |
| S2VC: A Framework for Any-to-Any Voice Conversion with Self-Supervised Pretrained Representations | Code | 1 |
| Self-Supervised Learning for Semi-Supervised Temporal Action Proposal | Code | 1 |
| An Empirical Study of Training Self-Supervised Vision Transformers | Code | 1 |
| The Spatially-Correlative Loss for Various Image Translation Tasks | Code | 1 |
| LaPred: Lane-Aware Prediction of Multi-Modal Future Trajectories of Dynamic Agents | Code | 1 |
| Self-supervised learning for tool wear monitoring with a disentangled-variational-autoencoder | Code | 1 |
| Prototypical Cross-domain Self-supervised Learning for Few-shot Unsupervised Domain Adaptation | Code | 1 |
| Neural Transformation Learning for Deep Anomaly Detection Beyond Images | Code | 1 |
| Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote Sensing Data | Code | 1 |
| Broaden Your Views for Self-Supervised Video Learning | Code | 1 |
| Self-supervised Graph Neural Networks without explicit negative sampling | Code | 1 |
| Quantum Self-Supervised Learning | Code | 1 |
| Rethinking Self-Supervised Learning: Small is Beautiful | Code | 1 |
| Vectorization and Rasterization: Self-Supervised Learning for Sketch and Handwriting | Code | 1 |
Page 24 of 101

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Pretraining: None | Images & Text | 57.5 | | Unverified |
| 2 | Pretraining: ShED | Images & Text | 54.3 | | Unverified |
| 3 | Pretraining: e-Mix | Images & Text | 48.9 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet50 | Accuracy | 91.7 | | Unverified |
| 2 | ResNet18 | Accuracy | 91.02 | | Unverified |
| 3 | MV-MR | Accuracy | 89.67 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet50 | average top-1 classification accuracy | 93.89 | | Unverified |
| 2 | ResNet18 | average top-1 classification accuracy | 92.58 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet50 | average top-1 classification accuracy | 72.51 | | Unverified |
| 2 | ResNet18 | average top-1 classification accuracy | 69.31 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CorInfomax (ResNet50) | Top-1 Accuracy | 82.64 | | Unverified |
| 2 | CorInfomax (ResNet18) | Top-1 Accuracy | 80.48 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | ResNet50 | average top-1 classification accuracy | 51.84 | | Unverified |
| 2 | ResNet18 | average top-1 classification accuracy | 51.67 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CorInfomax (ResNet18) | Top-1 Accuracy | 93.18 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CorInfomax (ResNet18) | Top-1 Accuracy | 71.61 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Hybrid BYOL-S/CvT | Accuracy | 67.2 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CorInfomax (ResNet50) | Top-1 Accuracy | 54.86 | | Unverified |