SOTAVerified

Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
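The idea above can be sketched in a few lines. This is a minimal, hypothetical illustration only: a tiny linear autoencoder (reconstruction of unlabeled inputs) stands in for the auxiliary task, and the learned encoder weights are what would later initialize a supervised model. Real systems in the papers below use richer objectives such as masked prediction or contrastive learning; all names and shapes here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Unlabeled" data: 200 samples with 16 features each.
X = rng.normal(size=(200, 16))

d_in, d_hidden = 16, 4
W_enc = rng.normal(scale=0.1, size=(d_in, d_hidden))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(d_hidden, d_in))  # decoder weights

def recon_loss(X, W_enc, W_dec):
    """Mean squared reconstruction error of the autoencoder."""
    X_hat = (X @ W_enc) @ W_dec
    return float(np.mean((X - X_hat) ** 2))

# Pre-train on the auxiliary task: reconstruct X from its code.
lr = 0.01
loss_before = recon_loss(X, W_enc, W_dec)
for _ in range(500):
    Z = X @ W_enc                                # encode
    err = Z @ W_dec - X                          # reconstruction error
    grad_dec = Z.T @ err / len(X)                # gradient w.r.t. W_dec
    grad_enc = X.T @ (err @ W_dec.T) / len(X)    # gradient w.r.t. W_enc
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
loss_after = recon_loss(X, W_enc, W_dec)

# After pre-training, W_enc would be reused to initialize the encoder
# of a supervised network and fine-tuned on a (smaller) labeled set.
```

No labels are used anywhere above; the supervision signal comes entirely from the data itself, which is what makes the pre-training stage applicable to large unlabeled corpora.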

Papers

Showing 71–80 of 265 papers

| Title | Status | Hype |
| --- | --- | --- |
| A Further Study of Unsupervised Pre-training for Transformer Based Speech Recognition | Code | 1 |
| Rolling-Unrolling LSTMs for Action Anticipation from First-Person Video | Code | 1 |
| TACRED Revisited: A Thorough Evaluation of the TACRED Relation Extraction Task | Code | 1 |
| Improving Transformer-based Speech Recognition Using Unsupervised Pre-training | Code | 1 |
| Leveraging Pre-trained Checkpoints for Sequence Generation Tasks | Code | 1 |
| wav2vec: Unsupervised Pre-training for Speech Recognition | Code | 1 |
| Multilingual Constituency Parsing with Self-Attention and Pre-Training | Code | 1 |
| Exact solutions to the nonlinear dynamics of learning in deep linear neural networks | Code | 1 |
| Foundation Model for Wireless Technology Recognition Using IQ Timeseries | | 0 |
| Is MixIT Really Unsuitable for Correlated Sources? Exploring MixIT for Unsupervised Pre-training in Music Source Separation | | 0 |
Page 8 of 27

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | 15 RDLs | Accuracy (%) | 95 | | Unverified |
| 2 | 9 RDLs | Accuracy (%) | 94 | | Unverified |
| 3 | 3 RMDL | Accuracy (%) | 93 | | Unverified |
| 4 | CNN | Accuracy (%) | 73 | | Unverified |
| 5 | RMDL | Accuracy (%) | 0.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | | Unverified |
| 2 | | Sensitivity | 89.1 | | Unverified |
| 3 | RMDL 3 RDLs | Sensitivity | 0.87 | | Unverified |