SOTAVerified

Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
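One classic instance of this idea is a denoising autoencoder: corrupt unlabeled inputs, train the network to reconstruct the clean version, then reuse the encoder weights downstream. The sketch below is purely illustrative (the data, dimensions, and learning rate are assumptions, not taken from any paper on this page):

```python
import numpy as np

# Minimal sketch of unsupervised pre-training via a one-hidden-layer
# denoising autoencoder. Hyperparameters are illustrative assumptions.

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 20))        # unlabeled data: 256 samples, 20 features

d_in, d_hid = 20, 8
W1 = rng.standard_normal((d_in, d_hid)) * 0.1   # encoder weights
W2 = rng.standard_normal((d_hid, d_in)) * 0.1   # decoder weights
lr = 0.01

losses = []
for step in range(200):
    noisy = X + 0.1 * rng.standard_normal(X.shape)  # corrupt the input
    h = np.tanh(noisy @ W1)                         # encode
    x_hat = h @ W2                                  # decode (linear)
    err = x_hat - X                                 # reconstruct the CLEAN input
    losses.append(float((err ** 2).mean()))

    # plain gradient descent through both layers
    g2 = h.T @ err / len(X)
    g_h = (err @ W2.T) * (1 - h ** 2)
    g1 = noisy.T @ g_h / len(X)
    W2 -= lr * g2
    W1 -= lr * g1

# After pre-training, W1 would initialize the encoder of a supervised model
# that is then fine-tuned on labeled data.
```

The reconstruction loss falls as the encoder learns structure in the unlabeled data; the supervised task never appears during this phase.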

Papers

Showing 1-10 of 265 papers

Title | Status | Hype
Dynamic data sampler for cross-language transfer learning in large language models | Code | 7
Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models | Code | 5
DepthSplat: Connecting Gaussian Splatting and Depth | Code | 5
Large Brain Model for Learning Generic Representations with Tremendous EEG Data in BCI | Code | 4
A Survey on Data Selection for Language Models | Code | 3
Large-Scale Pre-training for Person Re-identification with Noisy Labels | Code | 2
Foundation Policies with Hilbert Representations | Code | 2
CrystalFormer-RL: Reinforcement Fine-Tuning for Materials Design | Code | 2
FSFM: A Generalizable Face Security Foundation Model via Self-Supervised Facial Representation Learning | Code | 2
SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery | Code | 2
Page 1 of 27

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | 15 RDLs | Accuracy (%) | 95 | — | Unverified
2 | 9 RDLs | Accuracy (%) | 94 | — | Unverified
3 | 3 RMDL | Accuracy (%) | 93 | — | Unverified
4 | CNN | Accuracy (%) | 73 | — | Unverified
5 | RMDL | Accuracy (%) | 0.1 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | — | Unverified
2 | — | Sensitivity | 89.1 | — | Unverified
3 | RMDL (3 RDLs) | Sensitivity | 0.87 | — | Unverified