
Unsupervised Pre-training

Pre-training a neural network on unlabeled data using unsupervised (self-supervised) auxiliary tasks; the learned weights then serve as the initialization for supervised fine-tuning on a downstream task.
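As a concrete illustration, here is a minimal PyTorch sketch of the two-stage recipe (the architecture, layer sizes, denoising objective, and training loops below are illustrative assumptions, not taken from any specific paper on this page): an encoder is first pre-trained on unlabeled data with a self-supervised reconstruction task, and its weights are then reused to initialize a supervised classifier.

```python
# Minimal sketch of unsupervised pre-training followed by supervised
# fine-tuning. All shapes, objectives, and hyperparameters are illustrative.
import torch
import torch.nn as nn

# Encoder whose weights we want to pre-train without labels.
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
# Auxiliary decoder used only during pre-training.
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784))

autoencoder = nn.Sequential(encoder, decoder)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

# --- Stage 1: unsupervised pre-training on unlabeled data ---
unlabeled = torch.rand(512, 784)  # stand-in for a real unlabeled dataset
for step in range(100):
    # Denoising-reconstruction auxiliary task: corrupt the input,
    # train the network to recover the clean version.
    noisy = unlabeled + 0.3 * torch.randn_like(unlabeled)
    loss = nn.functional.mse_loss(autoencoder(noisy), unlabeled)
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Stage 2: supervised fine-tuning reuses the pre-trained encoder ---
classifier = nn.Sequential(encoder, nn.Linear(64, 10))
opt = torch.optim.Adam(classifier.parameters(), lr=1e-4)
labeled_x = torch.rand(64, 784)              # small labeled set
labeled_y = torch.randint(0, 10, (64,))      # class labels
for step in range(20):
    loss = nn.functional.cross_entropy(classifier(labeled_x), labeled_y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The auxiliary decoder and the reconstruction loss are discarded after stage 1; only the encoder weights carry over, which is what lets fine-tuning start from a better-than-random initialization when labeled data is scarce.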

Papers

Showing 201–250 of 265 papers

Title | Status | Hype
Unsupervised Learning with Truncated Gaussian Graphical Models | — | 0
Unsupervised Pre-trained, Texture Aware And Lightweight Model for Deep Learning-Based Iris Recognition Under Limited Annotated Data | — | 0
Unsupervised pre-training for sequence to sequence speech recognition | — | 0
Unsupervised Pre-Training for 3D Leaf Instance Segmentation | — | 0
Unsupervised Pre-training for Biomedical Question Answering | — | 0
Unsupervised Pre-training for Natural Language Generation: A Literature Review | — | 0
Deeply Unsupervised Patch Re-Identification for Pre-training Object Detectors | — | 0
Unsupervised Pre-Training for Vietnamese Automatic Speech Recognition in the HYKIST Project | — | 0
Unsupervised pre-training helps to conserve views from input distribution | — | 0
Unsupervised Pre-Training Using Masked Autoencoders for ECG Analysis | — | 0
Unsupervised Pre-training With Seq2Seq Reconstruction Loss for Deep Relation Extraction Models | — | 0
Unsupervised Pre-training with Structured Knowledge for Improving Natural Language Inference | — | 0
Weakly Supervised Construction of ASR Systems with Massive Video Data | — | 0
What is the Best Feature Learning Procedure in Hierarchical Recognition Architectures? | — | 0
Wukong-Reader: Multi-modal Pre-training for Fine-grained Visual Document Understanding | Code | 0
Pre-train and Learn: Preserve Global Information for Graph Neural Networks | Code | 0
Curriculum-Based Self-Training Makes Better Few-Shot Learners for Data-to-Text Generation | Code | 0
Post Training in Deep Learning with Last Kernel | Code | 0
An Analysis of Unsupervised Pre-training in Light of Recent Advances | Code | 0
Neural Grammatical Error Correction Systems with Unsupervised Pre-training on Synthetic Data | Code | 0
PUNR: Pre-training with User Behavior Modeling for News Recommendation | Code | 0
MML: Maximal Multiverse Learning for Robust Fine-Tuning of Language Models | Code | 0
m2caiSeg: Semantic Segmentation of Laparoscopic Images using Convolutional Neural Networks | Code | 0
Unsupervised Transfer Learning for Spoken Language Understanding in Intelligent Agents | Code | 0
COLA: COarse LAbel pre-training for 3D semantic segmentation of sparse LiDAR datasets | Code | 0
Unsupervised Pre-Training of Image Features on Non-Curated Data | Code | 0
A Large-Scale Study on Unsupervised Spatiotemporal Representation Learning | Code | 0
VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain | Code | 0
CochCeps-Augment: A Novel Self-Supervised Contrastive Learning Using Cochlear Cepstrum-based Masking for Speech Emotion Recognition | Code | 0
Learning Deep Representations Using Convolutional Auto-encoders with Symmetric Skip Connections | Code | 0
Tuning Multilingual Transformers for Language-Specific Named Entity Recognition | Code | 0
Tuning Multilingual Transformers for Named Entity Recognition on Slavic Languages | Code | 0
RMDL: Random Multimodel Deep Learning for Classification | Code | 0
LATTE: Label-efficient Incident Phenotyping from Longitudinal Electronic Health Records | Code | 0
Knowledge Matters: Importance of Prior Information for Optimization | Code | 0
Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback | Code | 0
Improving Relation Extraction by Pre-trained Language Representations | Code | 0
How much do LLMs learn from negative examples? | Code | 0
Calibrating Language Models with Adaptive Temperature Scaling | Code | 0
Self-Supervised Modality-Agnostic Pre-Training of Swin Transformers | Code | 0
Self-Supervised Pre-Training Boosts Semantic Scene Segmentation on LiDAR Data | Code | 0
Advancing PICO Element Detection in Biomedical Text via Deep Neural Networks | Code | 0
How far can we go without convolution: Improving fully-connected networks | Code | 0
Self-training and Pre-training are Complementary for Speech Recognition | Code | 0
Greedy-layer Pruning: Speeding up Transformer Models for Natural Language Processing | Code | 0
GiBERT: Enhancing BERT with Linguistic Information using a Lightweight Gated Injection Method | Code | 0
Contextual embedding and model weighting by fusing domain knowledge on Biomedical Question Answering | Code | 0
From Recognition to Prediction: Leveraging Sequence Reasoning for Action Anticipation | Code | 0
ZS-VCOS: Zero-Shot Outperforms Supervised Video Camouflaged Object Segmentation with Zero-Shot Method | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | 15 RDLs | Accuracy (%) | 95 | — | Unverified
2 | 9 RDLs | Accuracy (%) | 94 | — | Unverified
3 | 3 RMDL | Accuracy (%) | 93 | — | Unverified
4 | CNN | Accuracy (%) | 73 | — | Unverified
5 | RMDL | Accuracy (%) | 0.1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | — | Unverified
2 | — | Sensitivity | 89.1 | — | Unverified
3 | RMDL (3 RDLs) | Sensitivity | 0.87 | — | Unverified