SOTAVerified

Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
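As a toy illustration of the idea (not any specific paper's method), the sketch below pre-trains a one-layer denoising autoencoder on unlabeled data, then reuses the learned encoder weights as features for a downstream task. All dimensions, hyperparameters, and the choice of corruption as the auxiliary task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrain_autoencoder(X, hidden=16, lr=0.01, epochs=200, noise=0.1):
    """Learn encoder weights W by reconstructing X from a corrupted copy.

    The reconstruction objective is the unsupervised auxiliary task;
    no labels are used anywhere in this function.
    """
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, hidden))   # encoder weights
    V = rng.normal(scale=0.1, size=(hidden, d))   # decoder weights
    for _ in range(epochs):
        X_noisy = X + noise * rng.normal(size=X.shape)  # corrupt the input
        H = np.tanh(X_noisy @ W)                        # encode
        X_hat = H @ V                                   # decode
        err = X_hat - X                                 # reconstruction error
        # Gradients through the two linear maps and the tanh nonlinearity.
        grad_V = H.T @ err / n
        grad_H = (err @ V.T) * (1 - H**2)
        grad_W = X_noisy.T @ grad_H / n
        W -= lr * grad_W
        V -= lr * grad_V
    return W

# Stand-in for a large unlabeled corpus.
X_unlabeled = rng.normal(size=(256, 8))
W_pretrained = pretrain_autoencoder(X_unlabeled)

# The pre-trained encoder then initializes a supervised model;
# only the task-specific head would be trained from scratch.
features = np.tanh(X_unlabeled @ W_pretrained)
print(features.shape)
```

In practice the same pattern appears at much larger scale (e.g. masked-token or contrastive objectives over deep networks), but the workflow is the same: optimize an auxiliary objective on unlabeled data, then transfer the learned weights.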

Papers

Showing 171–180 of 265 papers

Title | Status | Hype
Learning of feature points without additional supervision improves reinforcement learning from images | Code | 0
Automatic Sexism Detection with Multilingual Transformer Models | — | 0
Greedy-layer Pruning: Speeding up Transformer Models for Natural Language Processing | Code | 0
Audio Transformers | — | 0
A Large-Scale Study on Unsupervised Spatiotemporal Representation Learning | Code | 0
SYNFIX: Automatically Fixing Syntax Errors using Compiler Diagnostics | — | 0
Representation Learning for Weakly Supervised Relation Extraction | — | 0
On Architectures and Training for Raw Waveform Feature Extraction in ASR | — | 0
Maximal Multiverse Learning for Promoting Cross-Task Generalization of Fine-Tuned Language Models | — | 0
Deeply Unsupervised Patch Re-Identification for Pre-training Object Detectors | Code | 0
Page 18 of 27

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | 15 RDLs | Accuracy (%) | 95 | — | Unverified
2 | 9 RDLs | Accuracy (%) | 94 | — | Unverified
3 | 3 RMDL | Accuracy (%) | 93 | — | Unverified
4 | CNN | Accuracy (%) | 73 | — | Unverified
5 | RMDL | Accuracy (%) | 0.1 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | — | Unverified
2 | — | Sensitivity | 89.1 | — | Unverified
3 | RMDL 3 RDLs | Sensitivity | 0.87 | — | Unverified