
Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
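As a concrete illustration of the approach described above, here is a minimal sketch of the classic two-stage recipe, assuming PyTorch. The architecture, loaders, and hyperparameters are illustrative toy stand-ins, not taken from any listed paper: an encoder is first trained on unlabeled data with a reconstruction (autoencoder) objective, then reused as the backbone of a classifier fine-tuned on a smaller labeled set.

```python
import torch
import torch.nn as nn

# Toy stand-ins for real datasets; in practice these would be DataLoaders
# over an unlabeled corpus and a (smaller) labeled set.
unlabeled_loader = [torch.rand(32, 784) for _ in range(100)]
labeled_loader = [(torch.rand(32, 784), torch.randint(0, 10, (32,)))
                  for _ in range(20)]

encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784))

# Stage 1: unsupervised pre-training. The auxiliary task here is
# reconstruction: the autoencoder learns representations with no labels.
autoencoder = nn.Sequential(encoder, decoder)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
for x in unlabeled_loader:
    loss = nn.functional.mse_loss(autoencoder(x), x)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: supervised fine-tuning. The pre-trained encoder is reused as
# the backbone, with a fresh classification head trained on labeled data.
classifier = nn.Sequential(encoder, nn.Linear(64, 10))
opt = torch.optim.Adam(classifier.parameters(), lr=1e-4)
for x, y in labeled_loader:
    loss = nn.functional.cross_entropy(classifier(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The same two-stage pattern underlies most of the papers listed below; they differ mainly in the choice of auxiliary task (reconstruction, clustering, masked prediction) and backbone architecture.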

Papers

Showing 231–240 of 265 papers

| Title | Status | Hype |
| --- | --- | --- |
| COLA: COarse LAbel pre-training for 3D semantic segmentation of sparse LiDAR datasets | Code | 0 |
| An Analysis of Unsupervised Pre-training in Light of Recent Advances | Code | 0 |
| RMDL: Random Multimodel Deep Learning for Classification | Code | 0 |
| Unsupervised Pre-Training of Image Features on Non-Curated Data | Code | 0 |
| Learning Deep Representations Using Convolutional Auto-encoders with Symmetric Skip Connections | Code | 0 |
| LATTE: Label-efficient Incident Phenotyping from Longitudinal Electronic Health Records | Code | 0 |
| Knowledge Matters: Importance of Prior Information for Optimization | Code | 0 |
| Improving Relation Extraction by Pre-trained Language Representations | Code | 0 |
| Tuning Multilingual Transformers for Language-Specific Named Entity Recognition | Code | 0 |
| Self-Supervised Modality-Agnostic Pre-Training of Swin Transformers | Code | 0 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 15 RDLs | Accuracy (%) | 95 | | Unverified |
| 2 | 9 RDLs | Accuracy (%) | 94 | | Unverified |
| 3 | 3 RMDL | Accuracy (%) | 93 | | Unverified |
| 4 | CNN | Accuracy (%) | 73 | | Unverified |
| 5 | RMDL | Accuracy (%) | 0.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | | Unverified |
| 2 | | Sensitivity | 89.1 | | Unverified |
| 3 | RMDL (3 RDLs) | Sensitivity | 0.87 | | Unverified |