SOTAVerified

Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
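As an illustrative sketch only (not tied to any paper listed below), the idea can be shown with a denoising autoencoder in plain NumPy: the pretext task is reconstructing clean inputs from corrupted ones, and the learned encoder weights would then initialize a supervised model. All names and sizes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unlabeled dataset (hypothetical stand-in): 200 samples, 16 features.
X = rng.normal(size=(200, 16))

# One-hidden-layer denoising autoencoder: encoder W1, decoder W2.
W1 = rng.normal(scale=0.1, size=(16, 4))
W2 = rng.normal(scale=0.1, size=(4, 16))
lr = 0.05

def recon_mse(W1, W2):
    """Reconstruction error on clean inputs."""
    return float(np.mean((np.tanh(X @ W1) @ W2 - X) ** 2))

loss_before = recon_mse(W1, W2)

for _ in range(500):
    # Pretext task: corrupt the input, then reconstruct the *clean* input.
    noisy = X + rng.normal(scale=0.1, size=X.shape)
    H = np.tanh(noisy @ W1)          # encode
    X_hat = H @ W2                   # decode
    err = X_hat - X                  # reconstruction error
    # Full-batch gradient descent through the two layers.
    grad_W2 = H.T @ err / len(X)
    grad_H = (err @ W2.T) * (1 - H ** 2)
    grad_W1 = noisy.T @ grad_H / len(X)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

loss_after = recon_mse(W1, W2)

# After pre-training, the encoder W1 would initialize a supervised model,
# with a small task-specific head fine-tuned on the (scarce) labeled data.
features = np.tanh(X @ W1)
```

No labels are used anywhere above: the supervision signal is manufactured from the data itself, which is what makes the pre-training "unsupervised" (self-supervised).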

Papers

Showing 151–175 of 265 papers

| Title | Status | Hype |
|---|---|---|
| Spiral Contrastive Learning: An Efficient 3D Representation Learning Method for Unannotated CT Lesions | | 0 |
| Evaluate Confidence Instead of Perplexity for Zero-shot Commonsense Reasoning | | 0 |
| Multi-Modal Unsupervised Pre-Training for Surgical Operating Room Workflow Analysis | | 0 |
| Unsupervised Instance Discriminative Learning for Depression Detection from Speech Signals | | 0 |
| Contextual embedding and model weighting by fusing domain knowledge on Biomedical Question Answering | Code | 0 |
| Curriculum-Based Self-Training Makes Better Few-Shot Learners for Data-to-Text Generation | Code | 0 |
| Deep Features for CBIR with Scarce Data using Hebbian Learning | | 0 |
| COLA: COarse LAbel pre-training for 3D semantic segmentation of sparse LiDAR datasets | Code | 0 |
| Boundary-aware Information Maximization for Self-supervised Medical Image Segmentation | | 0 |
| 3D Intracranial Aneurysm Classification and Segmentation via Unsupervised Dual-branch Learning | | 0 |
| Unleashing Potential of Unsupervised Pre-Training With Intra-Identity Regularization for Person Re-Identification | | 0 |
| 4DContrast: Contrastive Learning with Dynamic Correspondences for 3D Scene Understanding | | 0 |
| Unleashing the Potential of Unsupervised Pre-Training with Intra-Identity Regularization for Person Re-Identification | Code | 0 |
| Improving Abstractive Dialogue Summarization with Hierarchical Pretraining and Topic Segment | | 0 |
| GiBERT: Enhancing BERT with Linguistic Information using a Lightweight Gated Injection Method | Code | 0 |
| SLAM: A Unified Encoder for Speech and Language Modeling via Speech-Text Joint Pre-Training | | 0 |
| Unsupervised Pre-training with Structured Knowledge for Improving Natural Language Inference | | 0 |
| Triplet Contrastive Learning for Brain Tumor Classification | | 0 |
| Residual Contrastive Learning for Image Reconstruction: Learning Transferable Representations from Noisy Images | | 0 |
| Improving On-Screen Sound Separation for Open-Domain Videos with Audio-Visual Self-Attention | | 0 |
| Learning of feature points without additional supervision improves reinforcement learning from images | Code | 0 |
| Automatic Sexism Detection with Multilingual Transformer Models | | 0 |
| Greedy-layer Pruning: Speeding up Transformer Models for Natural Language Processing | Code | 0 |
| Audio Transformers | | 0 |
| A Large-Scale Study on Unsupervised Spatiotemporal Representation Learning | Code | 0 |
Page 7 of 11

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 15 RDLs | Accuracy (%) | 95 | | Unverified |
| 2 | 9 RDLs | Accuracy (%) | 94 | | Unverified |
| 3 | 3 RMDL | Accuracy (%) | 93 | | Unverified |
| 4 | CNN | Accuracy (%) | 73 | | Unverified |
| 5 | RMDL | Accuracy (%) | 0.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | | Unverified |
| 2 | | Sensitivity | 89.1 | | Unverified |
| 3 | RMDL 3 RDLs | Sensitivity | 0.87 | | Unverified |