SOTAVerified

Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
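The definition above can be made concrete with one classic pretext task: denoising reconstruction, where the network is trained to recover a clean input from a corrupted copy, using no labels at all. Below is a minimal pure-Python sketch of pre-training a linear autoencoder this way; all names, data, and hyperparameters are illustrative, not drawn from any listed paper.

```python
import random

random.seed(0)

def pretrain_denoising_autoencoder(data, hidden, epochs=500, lr=0.05, noise=0.2):
    """Pre-train a linear autoencoder on unlabeled vectors by denoising:
    corrupt each input with masking noise, reconstruct the clean version."""
    dim = len(data[0])
    # encoder W (hidden x dim) and decoder V (dim x hidden), small random init
    W = [[random.uniform(-0.1, 0.1) for _ in range(dim)] for _ in range(hidden)]
    V = [[random.uniform(-0.1, 0.1) for _ in range(hidden)] for _ in range(dim)]
    for _ in range(epochs):
        for x in data:
            # pretext task: zero out each feature with probability `noise`
            x_tilde = [0.0 if random.random() < noise else xi for xi in x]
            # forward pass: h = W x_tilde, x_hat = V h
            h = [sum(W[j][i] * x_tilde[i] for i in range(dim)) for j in range(hidden)]
            x_hat = [sum(V[i][j] * h[j] for j in range(hidden)) for i in range(dim)]
            # squared-error gradient; the target is the CLEAN input x
            err = [x_hat[i] - x[i] for i in range(dim)]
            grad_h = [sum(err[i] * V[i][j] for i in range(dim)) for j in range(hidden)]
            for i in range(dim):                      # decoder SGD step
                for j in range(hidden):
                    V[i][j] -= lr * err[i] * h[j]
            for j in range(hidden):                   # encoder SGD step
                for i in range(dim):
                    W[j][i] -= lr * grad_h[j] * x_tilde[i]
    return W, V

def reconstruction_error(data, W, V):
    """Mean squared reconstruction error on clean (uncorrupted) inputs."""
    dim, hidden = len(data[0]), len(W)
    total = 0.0
    for x in data:
        h = [sum(W[j][i] * x[i] for i in range(dim)) for j in range(hidden)]
        x_hat = [sum(V[i][j] * h[j] for j in range(hidden)) for i in range(dim)]
        total += sum((x_hat[i] - x[i]) ** 2 for i in range(dim))
    return total / len(data)

# unlabeled 3-D points lying on a line; no labels are used anywhere
data = [[t, 2 * t, -t] for t in (i / 10 - 1 for i in range(21))]
W, V = pretrain_denoising_autoencoder(data, hidden=2)
```

After pre-training, the encoder weights `W` would typically be kept as the initialization (or frozen feature extractor) for a downstream supervised model, which is the transfer step the papers below study.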

Papers

Showing 101–125 of 265 papers

Title | Status | Hype
Convergence of gradient based pre-training in Denoising autoencoders | | 0
Leveraging Random Label Memorization for Unsupervised Pre-Training | | 0
GFlowNet Pretraining with Inexpensive Rewards | | 0
Beyond Fine-Tuning: Transferring Behavior in Reinforcement Learning | | 0
GiBERT: Introducing Linguistic Knowledge into BERT through a Lightweight Gated Injection Method | | 0
Cross-Domain Training for Goal-Oriented Conversational Agents | | 0
Foundation Model for Wireless Technology Recognition Using IQ Timeseries | | 0
On Architectures and Training for Raw Waveform Feature Extraction in ASR | | 0
Faster learning of deep stacked autoencoders on multi-core systems using synchronized layer-wise pre-training | | 0
Author2Vec: A Framework for Generating User Embedding | | 0
Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning | | 0
Measles Rash Identification Using Residual Deep Convolutional Neural Network | | 0
On the Generalization Ability of Unsupervised Pretraining | | 0
FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs | | 0
Generalized 3D Self-supervised Learning Framework via Prompted Foreground-Aware Feature Contrast | | 0
Incomplete Multi-View Multi-label Learning via Disentangled Representation and Label Semantic Embedding | | 0
Co-Morbidity Exploration on Wearables Activity Data Using Unsupervised Pre-training and Multi-Task Learning | | 0
Extractive NarrativeQA with Heuristic Pre-Training | | 0
Extracting UMLS Concepts from Medical Text Using General and Domain-Specific Deep Learning Models | | 0
Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback | | 0
Deep Discriminative Model for Video Classification | | 0
Combining Unsupervised Pre-training and Annotator Rationales to Improve Low-shot Text Classification | | 0
Audio Transformers | | 0
Large Language Model Enabled Semantic Communication Systems | | 0
ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers under Domain Shifts | | 0
Page 5 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | 15 RDLs | Accuracy (%) | 95 | | Unverified
2 | 9 RDLs | Accuracy (%) | 94 | | Unverified
3 | 3 RMDL | Accuracy (%) | 93 | | Unverified
4 | CNN | Accuracy (%) | 73 | | Unverified
5 | RMDL | Accuracy (%) | 0.1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | | Unverified
2 | | Sensitivity | 89.1 | | Unverified
3 | RMDL 3 RDLs | Sensitivity | 0.87 | | Unverified