SOTAVerified

Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.

Papers

Showing 151–200 of 265 papers

| Title | Status | Hype |
|---|---|---|
| Spiral Contrastive Learning: An Efficient 3D Representation Learning Method for Unannotated CT Lesions | | 0 |
| Evaluate Confidence Instead of Perplexity for Zero-shot Commonsense Reasoning | | 0 |
| Multi-Modal Unsupervised Pre-Training for Surgical Operating Room Workflow Analysis | | 0 |
| Unsupervised Instance Discriminative Learning for Depression Detection from Speech Signals | | 0 |
| Contextual embedding and model weighting by fusing domain knowledge on Biomedical Question Answering | Code | 0 |
| Curriculum-Based Self-Training Makes Better Few-Shot Learners for Data-to-Text Generation | Code | 0 |
| Deep Features for CBIR with Scarce Data using Hebbian Learning | | 0 |
| COLA: COarse LAbel pre-training for 3D semantic segmentation of sparse LiDAR datasets | Code | 0 |
| Boundary-aware Information Maximization for Self-supervised Medical Image Segmentation | | 0 |
| 3D Intracranial Aneurysm Classification and Segmentation via Unsupervised Dual-branch Learning | | 0 |
| Unleashing Potential of Unsupervised Pre-Training With Intra-Identity Regularization for Person Re-Identification | | 0 |
| 4DContrast: Contrastive Learning with Dynamic Correspondences for 3D Scene Understanding | | 0 |
| Unleashing the Potential of Unsupervised Pre-Training with Intra-Identity Regularization for Person Re-Identification | Code | 0 |
| Improving Abstractive Dialogue Summarization with Hierarchical Pretraining and Topic Segment | | 0 |
| GiBERT: Enhancing BERT with Linguistic Information using a Lightweight Gated Injection Method | Code | 0 |
| SLAM: A Unified Encoder for Speech and Language Modeling via Speech-Text Joint Pre-Training | | 0 |
| Unsupervised Pre-training with Structured Knowledge for Improving Natural Language Inference | | 0 |
| Triplet Contrastive Learning for Brain Tumor Classification | | 0 |
| Residual Contrastive Learning for Image Reconstruction: Learning Transferable Representations from Noisy Images | | 0 |
| Improving On-Screen Sound Separation for Open-Domain Videos with Audio-Visual Self-Attention | | 0 |
| Learning of feature points without additional supervision improves reinforcement learning from images | Code | 0 |
| Automatic Sexism Detection with Multilingual Transformer Models | | 0 |
| Greedy-layer Pruning: Speeding up Transformer Models for Natural Language Processing | Code | 0 |
| Audio Transformers | | 0 |
| A Large-Scale Study on Unsupervised Spatiotemporal Representation Learning | Code | 0 |
| SYNFIX: Automatically Fixing Syntax Errors using Compiler Diagnostics | | 0 |
| Representation Learning for Weakly Supervised Relation Extraction | | 0 |
| On Architectures and Training for Raw Waveform Feature Extraction in ASR | | 0 |
| Maximal Multiverse Learning for Promoting Cross-Task Generalization of Fine-Tuned Language Models | | 0 |
| Deeply Unsupervised Patch Re-Identification for Pre-training Object Detectors | | 0 |
| A Brief Summary of Interactions Between Meta-Learning and Self-Supervised Learning | | 0 |
| Beyond Fine-Tuning: Transferring Behavior in Reinforcement Learning | | 0 |
| Bi-APC: Bidirectional Autoregressive Predictive Coding for Unsupervised Pre-training and Its Application to Children's ASR | | 0 |
| AT-BERT: Adversarial Training BERT for Acronym Identification Winning Solution for SDU@AAAI-21 | | 0 |
| R-LAtte: Attention Module for Visual Control via Reinforcement Learning | | 0 |
| Unsupervised Active Pre-Training for Reinforcement Learning | | 0 |
| Machine Translation Pre-training for Data-to-Text Generation - A Case Study in Czech | | 0 |
| Bi-tuning of Pre-trained Representations | | 0 |
| Semi-supervised Facial Action Unit Intensity Estimation with Contrastive Learning | | 0 |
| Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping | Code | 0 |
| Pre-training Graph Transformer with Multimodal Side Information for Recommendation | | 0 |
| GiBERT: Introducing Linguistic Knowledge into BERT through a Lightweight Gated Injection Method | | 0 |
| Self-training and Pre-training are Complementary for Speech Recognition | Code | 0 |
| Corruption Is Not All Bad: Incorporating Discourse Structure into Pre-training via Corruption for Essay Scoring | | 0 |
| Unsupervised Pre-training for Biomedical Question Answering | | 0 |
| ERNIE at SemEval-2020 Task 10: Learning Word Emphasis Selection by Pre-trained Language Model | | 0 |
| m2caiSeg: Semantic Segmentation of Laparoscopic Images using Convolutional Neural Networks | Code | 0 |
| Unsupervised Learning For Sequence-to-sequence Text-to-speech For Low-resource Languages | | 0 |
| Functional Regularization for Representation Learning: A Unified Theoretical Perspective | Code | 0 |
| Weakly Supervised Construction of ASR Systems with Massive Video Data | | 0 |
Page 4 of 6

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 15 RDLs | Accuracy (%) | 95 | | Unverified |
| 2 | 9 RDLs | Accuracy (%) | 94 | | Unverified |
| 3 | 3 RMDL | Accuracy (%) | 93 | | Unverified |
| 4 | CNN | Accuracy (%) | 73 | | Unverified |
| 5 | RMDL | Accuracy (%) | 0.1 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | | Unverified |
| 2 | | Sensitivity | 89.1 | | Unverified |
| 3 | RMDL 3 RDLs | Sensitivity | 0.87 | | Unverified |