
Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
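
As a concrete illustration of the idea, here is a minimal sketch of one common pretext task: a denoising autoencoder trained purely on unlabeled data, whose encoder weights can then initialize a downstream model. All module names, shapes, and hyperparameters below are illustrative assumptions, not taken from any paper listed on this page.

```python
import torch
import torch.nn as nn

# Encoder to be pre-trained and later reused for a downstream task.
encoder = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 64),
)
# Decoder used only during pre-training, then discarded.
decoder = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 784),
)

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)
loss_fn = nn.MSELoss()

unlabeled = torch.rand(512, 784)  # stand-in for a real unlabeled dataset

for epoch in range(5):
    for batch in unlabeled.split(64):
        noisy = batch + 0.3 * torch.randn_like(batch)  # corrupt the input
        recon = decoder(encoder(noisy))                # try to reconstruct it
        loss = loss_fn(recon, batch)   # the target comes from the data itself
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# After pre-training, `encoder` can initialize a supervised model and be
# fine-tuned on a (typically much smaller) labeled dataset.
```

The same pattern generalizes to the pretext tasks used by the papers below (masked prediction, contrastive objectives, and so on): only the corruption step and the loss change.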

Papers

Showing 126–150 of 265 papers

Title | Status | Hype
Bag of Tricks and A Strong baseline for Image Copy Detection | Code | 1
D^2LV: A Data-Driven and Local-Verification Approach for Image Copy Detection | Code | 1
GiBERT: Enhancing BERT with Linguistic Information using a Lightweight Gated Injection Method | Code | 0
Improving Abstractive Dialogue Summarization with Hierarchical Pretraining and Topic Segment | – | 0
SLAM: A Unified Encoder for Speech and Language Modeling via Speech-Text Joint Pre-Training | – | 0
Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition | Code | 1
Unsupervised Pre-training with Structured Knowledge for Improving Natural Language Inference | – | 0
RandomRooms: Unsupervised Pre-training from Synthetic Shapes and Randomized Layouts for 3D Object Detection | Code | 1
Triplet Contrastive Learning for Brain Tumor Classification | – | 0
Residual Contrastive Learning for Image Reconstruction: Learning Transferable Representations from Noisy Images | – | 0
Improving On-Screen Sound Separation for Open-Domain Videos with Audio-Visual Self-Attention | – | 0
Learning of feature points without additional supervision improves reinforcement learning from images | Code | 0
Automatic Sexism Detection with Multilingual Transformer Models | – | 0
PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training | Code | 1
Exploring the Limits of Out-of-Distribution Detection | Code | 1
Greedy-layer Pruning: Speeding up Transformer Models for Natural Language Processing | Code | 0
Initialization and Regularization of Factorized Neural Layers | Code | 1
Audio Transformers | – | 0
SYNFIX: Automatically Fixing Syntax Errors using Compiler Diagnostics | – | 0
A Large-Scale Study on Unsupervised Spatiotemporal Representation Learning | Code | 0
Representation Learning for Weakly Supervised Relation Extraction | – | 0
Patient Contrastive Learning: a Performant, Expressive, and Practical Approach to ECG Modeling | Code | 1
On Architectures and Training for Raw Waveform Feature Extraction in ASR | – | 0
Maximal Multiverse Learning for Promoting Cross-Task Generalization of Fine-Tuned Language Models | – | 0
Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote Sensing Data | Code | 1
Page 6 of 11

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | 15 RDLs | Accuracy (%) | 95 | – | Unverified
2 | 9 RDLs | Accuracy (%) | 94 | – | Unverified
3 | 3 RMDL | Accuracy (%) | 93 | – | Unverified
4 | CNN | Accuracy (%) | 73 | – | Unverified
5 | RMDL | Accuracy (%) | 0.1 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | – | Unverified
2 | – | Sensitivity | 89.1 | – | Unverified
3 | RMDL (3 RDLs) | Sensitivity | 0.87 | – | Unverified