SOTAVerified

Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
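The idea can be sketched with a toy example (illustrative only, not taken from any listed paper): a one-layer tied-weight autoencoder is fit to unlabeled data by minimizing reconstruction error, and its encoder then provides features or initial weights for a downstream supervised model.

```python
# Minimal sketch of unsupervised pre-training: train an autoencoder on
# unlabeled data, then reuse the learned encoder for a supervised task.
# All names and hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def pretrain_autoencoder(X, hidden, epochs=200, lr=0.1):
    """Fit a tanh autoencoder with tied weights; return encoder params (W, b)."""
    n, d = X.shape
    W = rng.normal(0, 0.1, (d, hidden))   # encoder weights (decoder reuses W.T)
    b = np.zeros(hidden)                  # encoder bias
    c = np.zeros(d)                       # decoder bias
    for _ in range(epochs):
        H = np.tanh(X @ W + b)            # encode
        X_hat = H @ W.T + c               # decode with tied weights
        err = X_hat - X                   # reconstruction error
        dA = (err @ W) * (1 - H ** 2)     # backprop through tanh pre-activation
        gW = X.T @ dA + err.T @ H         # gradient: encoder term + decoder term
        W -= lr * gW / n
        b -= lr * dA.sum(0) / n
        c -= lr * err.sum(0) / n
    return W, b

# Unlabeled data: two noisy clusters; no labels are used during pre-training.
X = np.vstack([rng.normal(-1, 0.3, (100, 5)),
               rng.normal(1, 0.3, (100, 5))])

W, b = pretrain_autoencoder(X, hidden=3)
H = np.tanh(X @ W + b)  # pre-trained features, ready to initialize or feed a classifier
print(H.shape)
```

In practice the encoder weights would seed the first layer of a supervised network (or, in the stacked/greedy layer-wise variants several listed papers study, the process repeats per layer), which is the "auxiliary task on unlabeled data" the description above refers to.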

Papers

Showing 176–200 of 265 papers

Title | Status | Hype
Deep Belief Networks Based Feature Generation and Regression for Predicting Wind Power | – | 0
Deep Discriminative Model for Video Classification | – | 0
Deep Features for CBIR with Scarce Data using Hebbian Learning | – | 0
Device Tuning for Multi-Task Large Model | – | 0
Discovery of Visual Semantics by Unsupervised and Self-Supervised Representation Learning | – | 0
DiscrimNet: Semi-Supervised Action Recognition from Videos using Generative Adversarial Networks | – | 0
Disentangling Node Attributes from Graph Topology for Improved Generalizability in Link Prediction | – | 0
DPER: Diffusion Prior Driven Neural Representation for Limited Angle and Sparse View CT Reconstruction | – | 0
Differentially Private Optimization for Non-Decomposable Objective Functions | – | 0
ECGBERT: Understanding Hidden Language of ECGs with Self-Supervised Representation Learning | – | 0
Effective training of deep convolutional neural networks for hyperspectral image classification through artificial labeling | – | 0
Empirical Evaluation of Active Learning Techniques for Neural MT | – | 0
Enhance Visual Recognition under Adverse Conditions via Deep Networks | – | 0
Enhancing the vocal range of single-speaker singing voice synthesis with melody-unsupervised pre-training | – | 0
ERNIE at SemEval-2020 Task 10: Learning Word Emphasis Selection by Pre-trained Language Model | – | 0
EUCLID: Towards Efficient Unsupervised Reinforcement Learning with Multi-choice Dynamics Model | – | 0
Evaluate Confidence Instead of Perplexity for Zero-shot Commonsense Reasoning | – | 0
Examining the Effect of Pre-training on Time Series Classification | – | 0
Exploiting Unsupervised Pre-training and Automated Feature Engineering for Low-resource Hate Speech Detection in Polish | – | 0
ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers under Domain Shifts | – | 0
Extracting UMLS Concepts from Medical Text Using General and Domain-Specific Deep Learning Models | – | 0
Extractive NarrativeQA with Heuristic Pre-Training | – | 0
Generalized 3D Self-supervised Learning Framework via Prompted Foreground-Aware Feature Contrast | – | 0
FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs | – | 0
Faster learning of deep stacked autoencoders on multi-core systems using synchronized layer-wise pre-training | – | 0
Page 8 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | 15 RDLs | Accuracy (%) | 95 | – | Unverified
2 | 9 RDLs | Accuracy (%) | 94 | – | Unverified
3 | 3 RMDL | Accuracy (%) | 93 | – | Unverified
4 | CNN | Accuracy (%) | 73 | – | Unverified
5 | RMDL | Accuracy (%) | 0.1 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | – | Unverified
2 | – | Sensitivity | 89.1 | – | Unverified
3 | RMDL 3 RDLs | Sensitivity | 0.87 | – | Unverified