
Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
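As a rough illustration of the two-stage recipe this definition describes, the sketch below pre-trains an encoder on unlabeled data with a denoising-reconstruction auxiliary task and then fine-tunes it on a small labeled set. It assumes PyTorch and synthetic tensors; the architecture, masking rate, and hyperparameters are illustrative choices, not drawn from any of the papers listed here.

```python
# Minimal sketch: unsupervised (self-supervised) pre-training, then supervised
# fine-tuning. All data, sizes, and hyperparameters are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Plenty of unlabeled data for pre-training, a small labeled set for fine-tuning.
unlabeled = torch.randn(1024, 64)
labeled_x = torch.randn(256, 64)
labeled_y = torch.randint(0, 10, (256,))

# Shared encoder that the auxiliary task is meant to initialize.
encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))

# --- Stage 1: self-supervised auxiliary task (denoising reconstruction) ---
decoder = nn.Linear(32, 64)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(100):
    idx = torch.randint(0, len(unlabeled), (128,))
    x = unlabeled[idx]
    corrupted = x * (torch.rand_like(x) > 0.3).float()  # zero out ~30% of inputs
    recon = decoder(encoder(corrupted))                  # reconstruct the clean input
    loss = nn.functional.mse_loss(recon, x)
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Stage 2: supervised fine-tuning of the pre-trained encoder ---
head = nn.Linear(32, 10)
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
for _ in range(50):
    logits = head(encoder(labeled_x))
    loss = nn.functional.cross_entropy(logits, labeled_y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The same pattern applies with other auxiliary objectives (masked token prediction, contrastive learning, next-frame prediction); only the Stage 1 loss changes, while the encoder weights carry over to fine-tuning.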

Papers

Showing 201-210 of 265 papers

Title | Status | Hype
On Architectures and Training for Raw Waveform Feature Extraction in ASR | | 0
Foundation Model for Wireless Technology Recognition Using IQ Timeseries | | 0
GFlowNet Pretraining with Inexpensive Rewards | | 0
GiBERT: Introducing Linguistic Knowledge into BERT through a Lightweight Gated Injection Method | | 0
HindiLLM: Large Language Model for Hindi | | 0
Improving Abstractive Dialogue Summarization with Hierarchical Pretraining and Topic Segment | | 0
Improving On-Screen Sound Separation for Open-Domain Videos with Audio-Visual Self-Attention | | 0
Incomplete Multi-View Multi-label Learning via Disentangled Representation and Label Semantic Embedding | | 0
Incorporating Unlabelled Data into Bayesian Neural Networks | | 0
Is MixIT Really Unsuitable for Correlated Sources? Exploring MixIT for Unsupervised Pre-training in Music Source Separation | | 0
Page 21 of 27

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | 15 RDLs | Accuracy (%) | 95 | | Unverified
2 | 9 RDLs | Accuracy (%) | 94 | | Unverified
3 | 3 RMDL | Accuracy (%) | 93 | | Unverified
4 | CNN | Accuracy (%) | 73 | | Unverified
5 | RMDL | Accuracy (%) | 0.1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | | Unverified
2 | | Sensitivity | 89.1 | | Unverified
3 | RMDL 3 RDLs | Sensitivity | 0.87 | | Unverified