SOTAVerified

Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
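The idea above can be sketched with a minimal example: fit a single-layer autoencoder on unlabeled data (the unsupervised auxiliary task), then reuse its encoder weights to initialize the first layer of a downstream supervised network. All names, dimensions, and hyperparameters here are illustrative assumptions, not taken from any listed paper.

```python
# Sketch of unsupervised pre-training with a tied-weight autoencoder.
# Assumed setup: zero-mean unlabeled data, tanh encoder, linear decoder.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 20))          # unlabeled data (no targets)

d_in, d_hidden = X.shape[1], 8
W = rng.normal(scale=0.1, size=(d_in, d_hidden))
b = np.zeros(d_hidden)                  # encoder bias
c = np.zeros(d_in)                      # decoder bias

def forward(X, W, b, c):
    H = np.tanh(X @ W + b)              # encoder
    R = H @ W.T + c                     # tied-weight decoder (reconstruction)
    return H, R

lr, losses = 0.01, []
for _ in range(300):
    H, R = forward(X, W, b, c)
    err = R - X                         # reconstruction error
    losses.append(float((err ** 2).mean()))
    # Backprop through the tied autoencoder: W appears in both
    # the encoder and the decoder, so its gradient has two terms.
    dH = err @ W                        # gradient into hidden activations
    dZ = dH * (1 - H ** 2)              # through tanh
    gW = (X.T @ dZ + err.T @ H) / len(X)
    W -= lr * gW
    b -= lr * dZ.mean(axis=0)
    c -= lr * err.mean(axis=0)

# Pre-training done: the learned encoder weights can now initialize
# the first layer of a supervised model trained on a labeled subset.
encoder_init = W.copy()
```

The pre-training objective (reconstruction) requires no labels, which is the point: the encoder learns a data-dependent initialization that a later supervised phase fine-tunes.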

Papers

Showing 251–265 of 265 papers

| Title | Status | Hype |
| --- | --- | --- |
| Learning Deep Representations Using Convolutional Auto-encoders with Symmetric Skip Connections | Code | 0 |
| Unsupervised Learning with Truncated Gaussian Graphical Models | | 0 |
| Post Training in Deep Learning with Last Kernel | Code | 0 |
| Adversarial Ladder Networks | | 0 |
| What is the Best Feature Learning Procedure in Hierarchical Recognition Architectures? | | 0 |
| Learning Discriminative Features with Class Encoder | | 0 |
| Faster learning of deep stacked autoencoders on multi-core systems using synchronized layer-wise pre-training | | 0 |
| Unsupervised Deep Feature Extraction for Remote Sensing Image Classification | | 0 |
| Data-dependent Initializations of Convolutional Neural Networks | Code | 0 |
| How far can we go without convolution: Improving fully-connected networks | Code | 0 |
| Convergence of gradient based pre-training in Denoising autoencoders | | 0 |
| An Analysis of Unsupervised Pre-training in Light of Recent Advances | Code | 0 |
| Learning Non-Linear Reconstruction Models for Image Set Classification | | 0 |
| Temporal Autoencoding Improves Generative Models of Time Series | | 0 |
| Knowledge Matters: Importance of Prior Information for Optimization | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 15 RDLs | Accuracy (%) | 95 | | Unverified |
| 2 | 9 RDLs | Accuracy (%) | 94 | | Unverified |
| 3 | 3 RMDL | Accuracy (%) | 93 | | Unverified |
| 4 | CNN | Accuracy (%) | 73 | | Unverified |
| 5 | RMDL | Accuracy (%) | 0.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | | Unverified |
| 2 | | Sensitivity | 89.1 | | Unverified |
| 3 | RMDL 3 RDLs | Sensitivity | 0.87 | | Unverified |