SOTAVerified

Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
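As a quick illustration of the two-stage recipe the listed papers build on, here is a minimal, hypothetical PyTorch sketch (not taken from any paper on this page): an encoder is first pre-trained with a self-supervised denoising-reconstruction objective on unlabeled data, then reused as the initialization for a supervised classifier. The layer sizes, the pretext task, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of unsupervised pre-training followed by supervised fine-tuning.
# Architecture, denoising pretext task, and hyperparameters are illustrative only.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784))

# --- Stage 1: unsupervised pre-training (denoising autoencoder pretext task) ---
unlabeled = torch.rand(512, 784)  # stand-in for a large pool of unlabeled inputs
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(5):
    noisy = unlabeled + 0.1 * torch.randn_like(unlabeled)   # corrupt the input
    recon = decoder(encoder(noisy))
    loss = nn.functional.mse_loss(recon, unlabeled)          # reconstruct the clean input
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Stage 2: supervised fine-tuning, reusing the pre-trained encoder ---
classifier = nn.Sequential(encoder, nn.Linear(64, 10))
labeled_x, labeled_y = torch.rand(64, 784), torch.randint(0, 10, (64,))  # small labeled set
opt = torch.optim.Adam(classifier.parameters(), lr=1e-4)
for _ in range(5):
    loss = nn.functional.cross_entropy(classifier(labeled_x), labeled_y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```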

Papers

Showing 101–150 of 265 papers (page 3 of 6)

Title | Status | Hype
Bridging the domain gap in cross-lingual document classification | Code | 0
GiBERT: Enhancing BERT with Linguistic Information using a Lightweight Gated Injection Method | Code | 0
SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations | Code | 0
Calibrating Language Models with Adaptive Temperature Scaling | Code | 0
Greedy-layer Pruning: Speeding up Transformer Models for Natural Language Processing | Code | 0
Take Package as Language: Anomaly Detection Using Transformer | Code | 0
How far can we go without convolution: Improving fully-connected networks | Code | 0
How much do LLMs learn from negative examples? | Code | 0
Improving Relation Extraction by Pre-trained Language Representations | Code | 0
Tuning Multilingual Transformers for Language-Specific Named Entity Recognition | Code | 0
Tuning Multilingual Transformers for Named Entity Recognition on Slavic Languages | Code | 0
Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping | Code | 0
Unleashing the Potential of Unsupervised Pre-Training with Intra-Identity Regularization for Person Re-Identification | Code | 0
From Recognition to Prediction: Leveraging Sequence Reasoning for Action Anticipation | Code | 0
Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback | Code | 0
Knowledge Matters: Importance of Prior Information for Optimization | Code | 0
LATTE: Label-efficient Incident Phenotyping from Longitudinal Electronic Health Records | Code | 0
Learning Deep Representations Using Convolutional Auto-encoders with Symmetric Skip Connections | Code | 0
CochCeps-Augment: A Novel Self-Supervised Contrastive Learning Using Cochlear Cepstrum-based Masking for Speech Emotion Recognition | Code | 0
Unsupervised Pre-Training of Image Features on Non-Curated Data | Code | 0
Unsupervised Pre-training with Language-Vision Prompts for Low-Data Instance Segmentation | Code | 0
Unsupervised Transfer Learning for Spoken Language Understanding in Intelligent Agents | Code | 0
VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain | Code | 0
COLA: COarse LAbel pre-training for 3D semantic segmentation of sparse LiDAR datasets | Code | 0
m2caiSeg: Semantic Segmentation of Laparoscopic Images using Convolutional Neural Networks | Code | 0
What Makes for Good Views for Contrastive Learning? | Code | 0
Wukong-Reader: Multi-modal Pre-training for Fine-grained Visual Document Understanding | Code | 0
ZS-VCOS: Zero-Shot Outperforms Supervised Video Camouflaged Object Segmentation with Zero-Shot Method | Code | 0
Deeply Unsupervised Patch Re-Identification for Pre-training Object Detectors | — | 0
Unsupervised Pre-Training for Vietnamese Automatic Speech Recognition in the HYKIST Project | — | 0
Unsupervised pre-training helps to conserve views from input distribution | — | 0
Unsupervised Pre-Training Using Masked Autoencoders for ECG Analysis | — | 0
Unsupervised Pre-training With Seq2Seq Reconstruction Loss for Deep Relation Extraction Models | — | 0
Unsupervised Pre-training with Structured Knowledge for Improving Natural Language Inference | — | 0
Weakly Supervised Construction of ASR Systems with Massive Video Data | — | 0
What is the Best Feature Learning Procedure in Hierarchical Recognition Architectures? | — | 0
Unsupervised Deep Feature Extraction for Remote Sensing Image Classification | — | 0
3D Intracranial Aneurysm Classification and Segmentation via Unsupervised Dual-branch Learning | — | 0
4DContrast: Contrastive Learning with Dynamic Correspondences for 3D Scene Understanding | — | 0
A Benchmark of Nested Named Entity Recognition Approaches in Historical Structured Documents | — | 0
A Brief History of Prompt: Leveraging Language Models. (Through Advanced Prompting) | — | 0
A Brief Summary of Interactions Between Meta-Learning and Self-Supervised Learning | — | 0
ACROBAT -- a multi-stain breast cancer histological whole-slide-image data set from routine diagnostics for computational pathology | — | 0
Adversarial Ladder Networks | — | 0
An Investigation of Noise Robustness for Flow-Matching-Based Zero-Shot TTS | — | 0
A Pitfall of Unsupervised Pre-Training | — | 0
Pre-training Graph Transformer with Multimodal Side Information for Recommendation | — | 0
AT-BERT: Adversarial Training BERT for Acronym Identification Winning Solution for SDU@AAAI-21 | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | 15 RDLs | Accuracy (%) | 95 | — | Unverified
2 | 9 RDLs | Accuracy (%) | 94 | — | Unverified
3 | 3 RMDL | Accuracy (%) | 93 | — | Unverified
4 | CNN | Accuracy (%) | 73 | — | Unverified
5 | RMDL | Accuracy (%) | 0.1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | — | Unverified
2 | — | Sensitivity | 89.1 | — | Unverified
3 | RMDL 3 RDLs | Sensitivity | 0.87 | — | Unverified