SOTAVerified

Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
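The idea can be sketched end to end: first minimize an unsupervised objective (here, reconstruction error) on unlabeled data, then reuse the learned encoder for a supervised task with few labels. This is a minimal illustration on synthetic data with a tied-weight linear autoencoder as the auxiliary task; the data, dimensions, and learning rates are all illustrative choices, not from any paper listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Unlabeled data: 10-D points lying near a 2-D latent subspace ---
latent = rng.normal(size=(200, 2))                 # hidden coordinates
basis = rng.normal(size=(2, 10))
X = 0.3 * latent @ basis + 0.01 * rng.normal(size=(200, 10))

# --- Unsupervised pre-training: tied-weight linear autoencoder ---
# The reconstruction loss ||X W W^T - X||^2 needs no labels; minimizing
# it forces the 2-D code Z = X W to capture the dominant structure of X.
W = 0.1 * rng.normal(size=(10, 2))                 # encoder weights
lr = 0.1
initial_err = np.mean((X @ W @ W.T - X) ** 2)
for _ in range(500):
    E = X @ W @ W.T - X                            # reconstruction residual
    grad = 2 * (X.T @ E @ W + E.T @ X @ W) / len(X)
    W -= lr * grad
final_err = np.mean((X @ W @ W.T - X) ** 2)

# --- Supervised fine-tuning: logistic head on the frozen encoder ---
# Only 20 labeled examples; the label is the sign of a latent coordinate.
y = (latent[:, 0] > 0).astype(float)
Z = X @ W                                          # pre-trained features
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z[:20] @ w + b)))    # train on first 20 only
    g = p - y[:20]
    w -= 0.5 * (Z[:20].T @ g) / 20
    b -= 0.5 * g.mean()
acc = np.mean((Z[20:] @ w + b > 0) == (y[20:] > 0.5))
print(f"reconstruction error: {initial_err:.3f} -> {final_err:.3f}")
print(f"fine-tuned accuracy on held-out points: {acc:.2f}")
```

The same two-phase pattern scales up to the papers in this category: swap the linear autoencoder for a masked-prediction, contrastive, or autoregressive auxiliary task, and the logistic head for a task-specific output layer.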

Papers

Showing 176–200 of 265 papers

| Title | Status | Hype |
| --- | --- | --- |
| SYNFIX: Automatically Fixing Syntax Errors using Compiler Diagnostics | | 0 |
| Representation Learning for Weakly Supervised Relation Extraction | | 0 |
| On Architectures and Training for Raw Waveform Feature Extraction in ASR | | 0 |
| Maximal Multiverse Learning for Promoting Cross-Task Generalization of Fine-Tuned Language Models | | 0 |
| Deeply Unsupervised Patch Re-Identification for Pre-training Object Detectors | | 0 |
| A Brief Summary of Interactions Between Meta-Learning and Self-Supervised Learning | | 0 |
| Beyond Fine-Tuning: Transferring Behavior in Reinforcement Learning | | 0 |
| Bi-APC: Bidirectional Autoregressive Predictive Coding for Unsupervised Pre-training and Its Application to Children's ASR | | 0 |
| AT-BERT: Adversarial Training BERT for Acronym Identification Winning Solution for SDU@AAAI-21 | | 0 |
| R-LAtte: Attention Module for Visual Control via Reinforcement Learning | | 0 |
| Unsupervised Active Pre-Training for Reinforcement Learning | | 0 |
| Machine Translation Pre-training for Data-to-Text Generation - A Case Study in Czech | | 0 |
| Bi-tuning of Pre-trained Representations | | 0 |
| Semi-supervised Facial Action Unit Intensity Estimation with Contrastive Learning | | 0 |
| Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping | Code | 0 |
| Pre-training Graph Transformer with Multimodal Side Information for Recommendation | | 0 |
| GiBERT: Introducing Linguistic Knowledge into BERT through a Lightweight Gated Injection Method | | 0 |
| Self-training and Pre-training are Complementary for Speech Recognition | Code | 0 |
| Corruption Is Not All Bad: Incorporating Discourse Structure into Pre-training via Corruption for Essay Scoring | | 0 |
| Unsupervised Pre-training for Biomedical Question Answering | | 0 |
| ERNIE at SemEval-2020 Task 10: Learning Word Emphasis Selection by Pre-trained Language Model | | 0 |
| m2caiSeg: Semantic Segmentation of Laparoscopic Images using Convolutional Neural Networks | Code | 0 |
| Unsupervised Learning For Sequence-to-sequence Text-to-speech For Low-resource Languages | | 0 |
| Functional Regularization for Representation Learning: A Unified Theoretical Perspective | Code | 0 |
| Weakly Supervised Construction of ASR Systems with Massive Video Data | | 0 |
Page 8 of 11

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 15 RDLs | Accuracy (%) | 95 | | Unverified |
| 2 | 9 RDLs | Accuracy (%) | 94 | | Unverified |
| 3 | 3 RMDL | Accuracy (%) | 93 | | Unverified |
| 4 | CNN | Accuracy (%) | 73 | | Unverified |
| 5 | RMDL | Accuracy (%) | 0.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | | Unverified |
| 2 | | Sensitivity | 89.1 | | Unverified |
| 3 | RMDL 3 RDLs | Sensitivity | 0.87 | | Unverified |