SOTAVerified

Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
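The idea in the definition above can be sketched with a denoising-autoencoder pretext task: the network learns to reconstruct clean inputs from corrupted copies of unlabeled data, and the resulting encoder weights then initialize a downstream supervised model. This is a minimal illustrative sketch, not any specific paper's method; all names, sizes, and hyperparameters are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data: 200 samples, 8 features (synthetic, for illustration).
X = rng.normal(size=(200, 8))

# One-hidden-layer autoencoder: encoder W_enc, decoder W_dec.
d, h = 8, 4
W_enc = rng.normal(scale=0.1, size=(d, h))
W_dec = rng.normal(scale=0.1, size=(h, d))

def forward(x):
    z = np.tanh(x @ W_enc)   # encoder: latent representation
    return z, z @ W_dec      # decoder: reconstruction of the input

def recon_loss(X):
    _, X_hat = forward(X)
    return float(np.mean((X_hat - X) ** 2))

loss_before = recon_loss(X)

lr = 0.01
for _ in range(200):
    # Pretext task: reconstruct the clean input from a corrupted copy.
    noise = rng.normal(scale=0.3, size=X.shape)
    z, X_hat = forward(X + noise)
    err = X_hat - X
    # Gradient descent through the decoder, tanh, and encoder.
    W_dec -= lr * z.T @ err / len(X)
    dz = (err @ W_dec.T) * (1 - z**2)
    W_enc -= lr * (X + noise).T @ dz / len(X)

loss_after = recon_loss(X)
# After pre-training, W_enc can initialize a supervised network's first layer.
```

No labels are used anywhere; the supervisory signal comes entirely from the reconstruction objective, which is what makes the pre-training "unsupervised" in the sense of this page.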

Papers

Showing 201–225 of 265 papers

| Title | Status | Hype |
|---|---|---|
| On Architectures and Training for Raw Waveform Feature Extraction in ASR | | 0 |
| Foundation Model for Wireless Technology Recognition Using IQ Timeseries | | 0 |
| GFlowNet Pretraining with Inexpensive Rewards | | 0 |
| GiBERT: Introducing Linguistic Knowledge into BERT through a Lightweight Gated Injection Method | | 0 |
| HindiLLM: Large Language Model for Hindi | | 0 |
| Improving Abstractive Dialogue Summarization with Hierarchical Pretraining and Topic Segment | | 0 |
| Improving On-Screen Sound Separation for Open-Domain Videos with Audio-Visual Self-Attention | | 0 |
| Incomplete Multi-View Multi-label Learning via Disentangled Representation and Label Semantic Embedding | | 0 |
| Incorporating Unlabelled Data into Bayesian Neural Networks | | 0 |
| Is MixIT Really Unsuitable for Correlated Sources? Exploring MixIT for Unsupervised Pre-training in Music Source Separation | | 0 |
| Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback | | 0 |
| Large Language Model Enabled Semantic Communication Systems | | 0 |
| Latte: Transfering LLMs' Latent-level Knowledge for Few-shot Tabular Learning | | 0 |
| Learning Discriminative Features with Class Encoder | | 0 |
| Learning Non-Linear Reconstruction Models for Image Set Classification | | 0 |
| Learning Unsupervised Gaze Representation via Eye Mask Driven Information Bottleneck | | 0 |
| Leveraging Random Label Memorization for Unsupervised Pre-Training | | 0 |
| Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning | | 0 |
| Machine Translation Pre-training for Data-to-Text Generation - A Case Study in Czech | | 0 |
| Masked Feature Modelling: Feature Masking for the Unsupervised Pre-training of a Graph Attention Network Block for Bottom-up Video Event Recognition | | 0 |
| Maximal Multiverse Learning for Promoting Cross-Task Generalization of Fine-Tuned Language Models | | 0 |
| Measles Rash Identification Using Residual Deep Convolutional Neural Network | | 0 |
| MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning | | 0 |
| Multi-Modal Unsupervised Pre-Training for Surgical Operating Room Workflow Analysis | | 0 |
| Multi-Stage Multi-Modal Pre-Training for Automatic Speech Recognition | | 0 |
Page 9 of 11

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 15 RDLs | Accuracy (%) | 95 | | Unverified |
| 2 | 9 RDLs | Accuracy (%) | 94 | | Unverified |
| 3 | 3 RMDL | Accuracy (%) | 93 | | Unverified |
| 4 | CNN | Accuracy (%) | 73 | | Unverified |
| 5 | RMDL | Accuracy (%) | 0.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | | Unverified |
| 2 | | Sensitivity | 89.1 | | Unverified |
| 3 | RMDL 3 RDLs | Sensitivity | 0.87 | | Unverified |