SOTAVerified

Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
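Below is a minimal sketch of the two-stage recipe this task describes: an encoder is first trained on unlabeled data with a self-supervised pretext objective (here a denoising-autoencoder reconstruction loss, chosen only as an illustrative example), then fine-tuned with a small labeled set. All module names, shapes, and hyperparameters are assumptions for illustration, not taken from any paper listed on this page.

```python
# Sketch of unsupervised pre-training followed by supervised fine-tuning (PyTorch).
# The denoising-autoencoder pretext task and all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim=784, hidden_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)

encoder = Encoder()
decoder = nn.Linear(256, 784)

# Stage 1: unsupervised pre-training on unlabeled data (reconstruct clean input from a corrupted copy).
pretrain_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
unlabeled = torch.rand(512, 784)  # stand-in for an unlabeled dataset
for _ in range(5):
    noisy = unlabeled + 0.1 * torch.randn_like(unlabeled)  # corrupt the input
    recon = decoder(encoder(noisy))                        # reconstruct the clean input
    loss = nn.functional.mse_loss(recon, unlabeled)
    pretrain_opt.zero_grad(); loss.backward(); pretrain_opt.step()

# Stage 2: supervised fine-tuning of the pre-trained encoder on a small labeled set.
classifier = nn.Linear(256, 10)
finetune_opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-4)
labeled_x, labeled_y = torch.rand(64, 784), torch.randint(0, 10, (64,))
for _ in range(5):
    logits = classifier(encoder(labeled_x))
    loss = nn.functional.cross_entropy(logits, labeled_y)
    finetune_opt.zero_grad(); loss.backward(); finetune_opt.step()
```

The pretext task can be swapped for masked prediction, contrastive learning, or any other self-supervised objective; the key point is that the encoder weights learned in stage 1 initialize the supervised model in stage 2.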

Papers

Showing 211–220 of 265 papers

Title | Status | Hype
SMILES Transformer: Pre-trained Molecular Fingerprint for Low Data Drug Discovery | Code | 0
MML: Maximal Multiverse Learning for Robust Fine-Tuning of Language Models | Code | 0
Recognizing UMLS Semantic Types with Deep Learning | - | 0
Extractive NarrativeQA with Heuristic Pre-Training | - | 0
Empirical Evaluation of Active Learning Techniques for Neural MT | - | 0
Combining Unsupervised Pre-training and Annotator Rationales to Improve Low-shot Text Classification | - | 0
Unsupervised pre-training for sequence to sequence speech recognition | - | 0
Pre-train and Learn: Preserve Global Information for Graph Neural Networks | Code | 0
Extracting UMLS Concepts from Medical Text Using General and Domain-Specific Deep Learning Models | - | 0
SMiRL: Surprise Minimizing RL in Entropic Environments | - | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | 15 RDLs | Accuracy (%) | 95 | - | Unverified
2 | 9 RDLs | Accuracy (%) | 94 | - | Unverified
3 | 3 RMDL | Accuracy (%) | 93 | - | Unverified
4 | CNN | Accuracy (%) | 73 | - | Unverified
5 | RMDL | Accuracy (%) | 0.1 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | - | Unverified
2 | - | Sensitivity | 89.1 | - | Unverified
3 | RMDL 3 RDLs | Sensitivity | 0.87 | - | Unverified