
Unsupervised Pre-training

Unsupervised pre-training trains a neural network on unlabeled data using unsupervised (self-supervised) auxiliary tasks, producing representations that are then fine-tuned on a downstream supervised task.
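As a concrete illustration, here is a minimal sketch of the two-phase recipe, assuming PyTorch. A denoising-autoencoder objective stands in for the auxiliary task (the papers listed below use many others, e.g. contrastive or masked-prediction objectives), and all architectures, dimensions, and hyperparameters are illustrative placeholders, not taken from any specific paper.

```python
# Minimal sketch of unsupervised pre-training (assumes PyTorch).
# Phase 1 trains an encoder with a self-supervised auxiliary task
# (denoising reconstruction) on unlabeled data; phase 2 reuses the
# pre-trained encoder inside a supervised model.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784))

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

unlabeled = torch.rand(1024, 784)  # stand-in for a real unlabeled dataset

# Phase 1: self-supervised pre-training.
# The auxiliary task: reconstruct the clean input from a corrupted copy.
for epoch in range(5):
    for i in range(0, len(unlabeled), 64):
        x = unlabeled[i:i + 64]
        noisy = x + 0.3 * torch.randn_like(x)  # corruption provides the training signal
        loss = nn.functional.mse_loss(decoder(encoder(noisy)), x)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Phase 2: the pre-trained encoder initializes a supervised model;
# the task head (and optionally the encoder) is fine-tuned on labeled data.
classifier = nn.Sequential(encoder, nn.Linear(64, 10))
```

The point of the two phases is that the encoder weights learned from unlabeled data initialize the supervised model, which then typically needs far fewer labels to reach a given accuracy than training from scratch.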

Papers

Showing 176–200 of 265 papers

| Title | Status | Hype |
|---|---|---|
| Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping | Code | 0 |
| Unsupervised Vision-and-Language Pre-training Without Parallel Images and Captions | Code | 1 |
| GiBERT: Introducing Linguistic Knowledge into BERT through a Lightweight Gated Injection Method | | 0 |
| Pre-training Graph Transformer with Multimodal Side Information for Recommendation | | 0 |
| Self-training and Pre-training are Complementary for Speech Recognition | Code | 0 |
| Corruption Is Not All Bad: Incorporating Discourse Structure into Pre-training via Corruption for Essay Scoring | | 0 |
| A Transformer-based Framework for Multivariate Time Series Representation Learning | Code | 1 |
| Self-training Improves Pre-training for Natural Language Understanding | Code | 1 |
| Unsupervised Pre-training for Biomedical Question Answering | | 0 |
| ERNIE at SemEval-2020 Task 10: Learning Word Emphasis Selection by Pre-trained Language Model | | 0 |
| m2caiSeg: Semantic Segmentation of Laparoscopic Images using Convolutional Neural Networks | Code | 0 |
| Unsupervised Learning For Sequence-to-sequence Text-to-speech For Low-resource Languages | | 0 |
| Spatiotemporal Contrastive Video Representation Learning | Code | 1 |
| Functional Regularization for Representation Learning: A Unified Theoretical Perspective | Code | 0 |
| Weakly Supervised Construction of ASR Systems with Massive Video Data | | 0 |
| SeCo: Exploring Sequence Supervision for Unsupervised Representation Learning | Code | 1 |
| PointContrast: Unsupervised Pre-training for 3D Point Cloud Understanding | Code | 1 |
| Unsupervised Deep Representation Learning and Few-Shot Classification of PolSAR Images | | 0 |
| What Makes for Good Views for Contrastive Learning? | | 0 |
| A Further Study of Unsupervised Pre-training for Transformer Based Speech Recognition | Code | 1 |
| Measles Rash Identification Using Residual Deep Convolutional Neural Network | | 0 |
| Rolling-Unrolling LSTMs for Action Anticipation from First-Person Video | Code | 1 |
| TACRED Revisited: A Thorough Evaluation of the TACRED Relation Extraction Task | Code | 1 |
| Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning | | 0 |
| Pre-training Text Representations as Meta Learning | | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 15 RDLs | Accuracy (%) | 95 | | Unverified |
| 2 | 9 RDLs | Accuracy (%) | 94 | | Unverified |
| 3 | 3 RMDL | Accuracy (%) | 93 | | Unverified |
| 4 | CNN | Accuracy (%) | 73 | | Unverified |
| 5 | RMDL | Accuracy (%) | 0.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | | Unverified |
| 2 | | Sensitivity | 89.1 | | Unverified |
| 3 | RMDL 3 RDLs | Sensitivity | 0.87 | | Unverified |