SOTAVerified

Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
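The recipe described above can be sketched in a few lines. This is a minimal illustration only: the dataset, sizes, and the choice of a linear autoencoder as the auxiliary task are assumptions for the example, not taken from this page; real systems use auxiliary tasks such as masked modelling or contrastive learning, as many of the papers listed below do.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unlabeled dataset: 200 samples, 20 features
# (all sizes here are illustrative assumptions).
X = rng.normal(size=(200, 20))
n = len(X)

# Auxiliary task: reconstruction with a linear autoencoder.
# Encode 20 -> 8 dimensions, decode back, and minimize the
# mean squared reconstruction error by gradient descent.
W_enc = rng.normal(scale=0.2, size=(20, 8))
W_dec = rng.normal(scale=0.2, size=(8, 20))

initial_loss = np.mean((X @ W_enc @ W_dec - X) ** 2)

lr = 0.05
for _ in range(1000):
    H = X @ W_enc                 # encode
    err = H @ W_dec - X           # reconstruction error
    # Gradients of the squared-error loss w.r.t. both weight
    # matrices (up to a constant factor absorbed into lr).
    grad_dec = H.T @ err / n
    grad_enc = X.T @ (err @ W_dec.T) / n
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

loss = np.mean((X @ W_enc @ W_dec - X) ** 2)

# After pre-training, the encoder weights would initialize the lower
# layers of a supervised model and be fine-tuned on labeled data.
features = X @ W_enc
```

The point of the recipe is that `W_enc` is learned without any labels; only the (typically much smaller) downstream task needs annotation.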

Papers

Showing 201-250 of 265 papers

Title | Status | Hype
Improving Abstractive Dialogue Summarization with Hierarchical Pretraining and Topic Segment | | 0
Improving On-Screen Sound Separation for Open-Domain Videos with Audio-Visual Self-Attention | | 0
Incomplete Multi-View Multi-label Learning via Disentangled Representation and Label Semantic Embedding | | 0
Incorporating Unlabelled Data into Bayesian Neural Networks | | 0
Is MixIT Really Unsuitable for Correlated Sources? Exploring MixIT for Unsupervised Pre-training in Music Source Separation | | 0
Large Language Model Enabled Semantic Communication Systems | | 0
Latte: Transfering LLMs' Latent-level Knowledge for Few-shot Tabular Learning | | 0
Learning Discriminative Features with Class Encoder | | 0
Learning Non-Linear Reconstruction Models for Image Set Classification | | 0
Learning Unsupervised Gaze Representation via Eye Mask Driven Information Bottleneck | | 0
Leveraging Random Label Memorization for Unsupervised Pre-Training | | 0
Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning | | 0
Machine Translation Pre-training for Data-to-Text Generation - A Case Study in Czech | | 0
Masked Feature Modelling: Feature Masking for the Unsupervised Pre-training of a Graph Attention Network Block for Bottom-up Video Event Recognition | | 0
Maximal Multiverse Learning for Promoting Cross-Task Generalization of Fine-Tuned Language Models | | 0
Measles Rash Identification Using Residual Deep Convolutional Neural Network | | 0
MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning | | 0
Multi-Modal Unsupervised Pre-Training for Surgical Operating Room Workflow Analysis | | 0
Multi-Stage Multi-Modal Pre-Training for Automatic Speech Recognition | | 0
On the Generalization Ability of Unsupervised Pretraining | | 0
Unsupervised Deep Representation Learning and Few-Shot Classification of PolSAR Images | | 0
Point Cloud Unsupervised Pre-training via 3D Gaussian Splatting | | 0
Post-training for Deep Learning | | 0
Pre-Training and Fine-Tuning Generative Flow Networks | | 0
Pretraining Generative Flow Networks with Inexpensive Rewards for Molecular Graph Generation | | 0
Pre-training Text Representations as Meta Learning | | 0
Provable Benefits of Unsupervised Pre-training and Transfer Learning via Single-Index Models | | 0
Range-aware Positional Encoding via High-order Pretraining: Theory and Practice | | 0
Recognizing UMLS Semantic Types with Deep Learning | | 0
Representation Learning for Weakly Supervised Relation Extraction | | 0
Research on CPI Prediction Based on Natural Language Processing | | 0
Residual Contrastive Learning for Image Reconstruction: Learning Transferable Representations from Noisy Images | | 0
Risk Assessment Framework for Code LLMs via Leveraging Internal States | | 0
R-LAtte: Attention Module for Visual Control via Reinforcement Learning | | 0
Self-FuseNet: Data Free Unsupervised Remote Sensing Image Super-Resolution | | 0
Self-Supervised Relative Depth Learning for Urban Scene Understanding | | 0
Semi-supervised Facial Action Unit Intensity Estimation with Contrastive Learning | | 0
Semi-Supervised End-To-End Contrastive Learning For Time Series Classification | | 0
Semi-Supervised Semantic Segmentation of Cell Nuclei via Diffusion-based Large-Scale Pre-Training and Collaborative Learning | | 0
SLAM: A Unified Encoder for Speech and Language Modeling via Speech-Text Joint Pre-Training | | 0
SMiRL: Surprise Minimizing RL in Entropic Environments | | 0
Spiral Contrastive Learning: An Efficient 3D Representation Learning Method for Unannotated CT Lesions | | 0
SYNFIX: Automatically Fixing Syntax Errors using Compiler Diagnostics | | 0
Synthetic vascular structure generation for unsupervised pre-training in CTA segmentation tasks | | 0
Targeted Forgetting of Image Subgroups in CLIP Models | | 0
Temporal Autoencoding Improves Generative Models of Time Series | | 0
Temporal Interpolation as an Unsupervised Pretraining Task for Optical Flow Estimation | | 0
The Efficiency of Pre-training with Objective Masking in Pseudo Labeling for Semi-Supervised Text Classification | | 0
Towards General Text Embeddings with Multi-stage Contrastive Learning | | 0
TraIL-Det: Transformation-Invariant Local Feature Networks for 3D LiDAR Object Detection with Unsupervised Pre-Training | | 0
Page 5 of 6

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | 15 RDLs | Accuracy (%) | 95 | | Unverified
2 | 9 RDLs | Accuracy (%) | 94 | | Unverified
3 | 3 RMDL | Accuracy (%) | 93 | | Unverified
4 | CNN | Accuracy (%) | 73 | | Unverified
5 | RMDL | Accuracy (%) | 0.1 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | | Unverified
2 | | Sensitivity | 89.1 | | Unverified
3 | RMDL 3 RDLs | Sensitivity | 0.87 | | Unverified