SOTAVerified

Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
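As a concrete illustration of the pre-train-then-fine-tune recipe described above, here is a minimal PyTorch sketch: an encoder is first trained on a self-supervised denoising reconstruction task over unlabeled data, then reused as the backbone for a supervised head. The architecture, noise level, and random stand-in data are illustrative assumptions, not taken from any paper listed below.

```python
# Minimal sketch of unsupervised (self-supervised) pre-training with a
# denoising autoencoder in PyTorch. All dimensions and hyperparameters
# are illustrative assumptions, not drawn from any specific paper.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim=784, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, hidden_dim),
        )

    def forward(self, x):
        return self.net(x)

def pretrain(encoder, unlabeled_loader, epochs=1, lr=1e-3):
    """Self-supervised auxiliary task: reconstruct each input from a
    noise-corrupted copy. No labels are used anywhere in this loop."""
    decoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 784))
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x in unlabeled_loader:           # batches of raw inputs, no targets
            noisy = x + 0.1 * torch.randn_like(x)
            recon = decoder(encoder(noisy))
            loss = loss_fn(recon, x)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return encoder                           # pre-trained backbone

if __name__ == "__main__":
    # Stand-in for an unlabeled corpus: random vectors shaped like
    # flattened 28x28 images.
    unlabeled = torch.utils.data.DataLoader(torch.randn(512, 784), batch_size=64)
    enc = pretrain(Encoder(), unlabeled)
    # Downstream fine-tuning: attach a supervised classification head
    # to the pre-trained encoder and train on the (smaller) labeled set.
    model = nn.Sequential(enc, nn.Linear(128, 10))
```

The auxiliary task here is denoising reconstruction for brevity; the same pattern applies with other self-supervised objectives (masked prediction, contrastive learning) by swapping out the loss in `pretrain`.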

Papers

Showing 1–25 of 265 papers

Title | Status | Hype
Dynamic data sampler for cross-language transfer learning in large language models | Code | 7
Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models | Code | 5
DepthSplat: Connecting Gaussian Splatting and Depth | Code | 5
Large Brain Model for Learning Generic Representations with Tremendous EEG Data in BCI | Code | 4
A Survey on Data Selection for Language Models | Code | 3
CrystalFormer-RL: Reinforcement Fine-Tuning for Materials Design | Code | 2
FSFM: A Generalizable Face Security Foundation Model via Self-Supervised Facial Representation Learning | Code | 2
Foundation Policies with Hilbert Representations | Code | 2
SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery | Code | 2
Large-Scale Pre-training for Person Re-identification with Noisy Labels | Code | 2
SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model | Code | 1
PersonViT: Large-scale Self-supervised Vision Transformer for Person Re-Identification | Code | 1
ConStyle v2: A Strong Prompter for All-in-One Image Restoration | Code | 1
PEAC: Unsupervised Pre-training for Cross-Embodiment Reinforcement Learning | Code | 1
BMRetriever: Tuning Large Language Models as Better Biomedical Text Retrievers | Code | 1
Drop your Decoder: Pre-training with Bag-of-Word Prediction for Dense Passage Retrieval | Code | 1
Unified Multi-modal Unsupervised Representation Learning for Skeleton-based Action Understanding | Code | 1
METRA: Scalable Unsupervised RL with Metric-Aware Abstraction | Code | 1
HIQL: Offline Goal-Conditioned RL with Latent States as Actions | Code | 1
Pre-training Contextualized World Models with In-the-wild Videos for Reinforcement Learning | Code | 1
Rethinking Semi-supervised Learning with Language Models | Code | 1
PTGB: Pre-Train Graph Neural Networks for Brain Network Analysis | Code | 1
FreePoint: Unsupervised Point Cloud Instance Segmentation | Code | 1
Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner | Code | 1
Unsupervised Pre-Training For Data-Efficient Text-to-Speech On Low Resource Languages | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | 15 RDLs | Accuracy (%) | 95 | — | Unverified
2 | 9 RDLs | Accuracy (%) | 94 | — | Unverified
3 | 3 RMDL | Accuracy (%) | 93 | — | Unverified
4 | CNN | Accuracy (%) | 73 | — | Unverified
5 | RMDL | Accuracy (%) | 0.1 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | — | Unverified
2 | — | Sensitivity | 89.1 | — | Unverified
3 | RMDL 3 RDLs | Sensitivity | 0.87 | — | Unverified