
Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
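For concreteness, here is a minimal sketch of the usual two-phase recipe, assuming PyTorch and a denoising-autoencoder auxiliary task; the architecture, dimensions, and training data below are illustrative placeholders, not taken from any paper on this page.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim=784, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, hidden),
        )

    def forward(self, x):
        return self.net(x)

encoder = Encoder()
decoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 784))

# Phase 1: unsupervised pre-training on unlabeled data. The auxiliary
# task is to reconstruct the clean input from a randomly masked copy.
unlabeled = torch.rand(512, 784)  # stand-in for a real unlabeled dataset
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(5):
    corrupted = unlabeled * (torch.rand_like(unlabeled) > 0.3)  # masking noise
    loss = nn.functional.mse_loss(decoder(encoder(corrupted)), unlabeled)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Phase 2: discard the decoder, attach a task head, and fine-tune on a
# (much smaller) labeled set.
head = nn.Linear(128, 10)
labeled_x = torch.rand(64, 784)
labeled_y = torch.randint(0, 10, (64,))
ft_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
for _ in range(5):
    loss = nn.functional.cross_entropy(head(encoder(labeled_x)), labeled_y)
    ft_opt.zero_grad()
    loss.backward()
    ft_opt.step()
```

The decoder exists only to drive the auxiliary task; after pre-training it is discarded, and fine-tuning can get by with far fewer labeled examples because the encoder already captures structure in the input distribution.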

Papers

Showing 201–225 of 265 papers.

All listed papers have a Hype score of 0; [Code] marks entries with released code.

- Unsupervised Pre-trained, Texture Aware And Lightweight Model for Deep Learning-Based Iris Recognition Under Limited Annotated Data
- Unsupervised pre-training for sequence to sequence speech recognition
- Unsupervised Pre-Training for 3D Leaf Instance Segmentation
- Unsupervised Pre-training for Biomedical Question Answering
- Unsupervised Pre-training for Natural Language Generation: A Literature Review
- Deeply Unsupervised Patch Re-Identification for Pre-training Object Detectors
- Unsupervised Pre-Training for Vietnamese Automatic Speech Recognition in the HYKIST Project
- Unsupervised pre-training helps to conserve views from input distribution
- Unsupervised Pre-Training Using Masked Autoencoders for ECG Analysis
- Unsupervised Pre-training With Seq2Seq Reconstruction Loss for Deep Relation Extraction Models
- Unsupervised Pre-training with Structured Knowledge for Improving Natural Language Inference
- VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain
- Weakly Supervised Construction of ASR Systems with Massive Video Data
- What is the Best Feature Learning Procedure in Hierarchical Recognition Architectures?
- What Makes for Good Views for Contrastive Learning?
- Range-aware Positional Encoding via High-order Pretraining: Theory and Practice
- Recognizing UMLS Semantic Types with Deep Learning
- Representation Learning for Weakly Supervised Relation Extraction
- Pre-train and Learn: Preserve Global Information for Graph Neural Networks [Code]
- PUNR: Pre-training with User Behavior Modeling for News Recommendation [Code]
- Post Training in Deep Learning with Last Kernel [Code]
- Neural Grammatical Error Correction Systems with Unsupervised Pre-training on Synthetic Data [Code]
- Curriculum-Based Self-Training Makes Better Few-Shot Learners for Data-to-Text Generation [Code]
- Wukong-Reader: Multi-modal Pre-training for Fine-grained Visual Document Understanding [Code]

Benchmark Results

#  Model    Metric        Claimed  Verified  Status
1  15 RDLs  Accuracy (%)  95                 Unverified
2  9 RDLs   Accuracy (%)  94                 Unverified
3  3 RMDL   Accuracy (%)  93                 Unverified
4  CNN      Accuracy (%)  73                 Unverified
5  RMDL     Accuracy (%)  0.1                Unverified
#  Model           Metric             Claimed  Verified  Status
1  RMDL (30 RDLs)  Sensitivity (VEB)  90.69              Unverified
2                  Sensitivity        89.1               Unverified
3  RMDL (3 RDLs)   Sensitivity        0.87               Unverified
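The "n RDLs" model names above appear to refer to RMDL-style ensembles (Random Multimodel Deep Learning), in which several randomly configured deep models (RDLs) are trained independently and combined by majority vote. Below is a minimal sketch of that voting scheme, assuming PyTorch; the layer-size ranges are illustrative and the models are left untrained here, so this is not the reference RMDL implementation.

```python
import random
import torch
import torch.nn as nn

def make_rdl(in_dim=784, n_classes=10):
    # Each RDL gets a randomly drawn depth and width, mirroring RMDL's
    # random model generation (illustrative ranges, not the paper's).
    layers, dim = [], in_dim
    for _ in range(random.randint(1, 3)):
        width = random.choice([64, 128, 256])
        layers += [nn.Linear(dim, width), nn.ReLU()]
        dim = width
    layers.append(nn.Linear(dim, n_classes))
    return nn.Sequential(*layers)

def ensemble_predict(models, x):
    # Majority vote over each model's argmax prediction.
    votes = torch.stack([m(x).argmax(dim=1) for m in models])  # (n_models, batch)
    return votes.mode(dim=0).values

rdls = [make_rdl() for _ in range(15)]  # e.g. the "15 RDLs" row above
print(ensemble_predict(rdls, torch.rand(8, 784)))
```

Varying the number of RDLs, as the benchmark rows do, trades compute for ensemble diversity; the vote tends to stabilize as more independently trained models are added.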