
Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
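Since every paper below follows the same two-phase recipe, a brief illustration may help. The following is a minimal sketch, assuming PyTorch; the architecture, data, and reconstruction objective are illustrative placeholders and not the method of any listed paper.

```python
# Illustrative sketch of unsupervised pre-training (assumes PyTorch).
# Phase 1: train an encoder on unlabeled data via a self-supervised
# auxiliary task (here, autoencoder reconstruction).
# Phase 2: reuse the pre-trained encoder for a supervised downstream task.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784))

unlabeled = torch.rand(1024, 784)  # placeholder for real unlabeled data
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)
recon_loss = nn.MSELoss()

# Phase 1: self-supervised pre-training on unlabeled data.
for epoch in range(5):
    opt.zero_grad()
    loss = recon_loss(decoder(encoder(unlabeled)), unlabeled)
    loss.backward()
    opt.step()

# Phase 2: attach a task head to the pre-trained encoder and fine-tune
# on (typically scarce) labeled data.
head = nn.Linear(64, 10)
model = nn.Sequential(encoder, head)
labeled_x = torch.rand(64, 784)             # placeholder labeled inputs
labeled_y = torch.randint(0, 10, (64,))     # placeholder labels
ft_opt = torch.optim.Adam(model.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()
for epoch in range(5):
    ft_opt.zero_grad()
    loss = ce(model(labeled_x), labeled_y)
    loss.backward()
    ft_opt.step()
```

In practice the auxiliary task varies widely across the papers below (masked prediction, contrastive learning, generative modeling, and so on); reconstruction is used here only because it is the simplest self-supervised objective to show end to end.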

Papers

Showing 11–20 of 265 papers

Title | Status | Hype
How much do LLMs learn from negative examples? | Code | 0
Pretraining Generative Flow Networks with Inexpensive Rewards for Molecular Graph Generation | | 0
Provable Benefits of Unsupervised Pre-training and Transfer Learning via Single-Index Models | | 0
Targeted Forgetting of Image Subgroups in CLIP Models | | 0
Incomplete Multi-View Multi-label Learning via Disentangled Representation and Label Semantic Embedding | | 0
HindiLLM: Large Language Model for Hindi | | 0
FSFM: A Generalizable Face Security Foundation Model via Self-Supervised Facial Representation Learning | Code | 2
Transferring self-supervised pre-trained models for SHM data anomaly detection with scarce labeled data | | 0
CLAP: Unsupervised 3D Representation Learning for Fusion 3D Perception via Curvature Sampling and Prototype Learning | | 0
Point Cloud Unsupervised Pre-training via 3D Gaussian Splatting | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | 15 RDLs | Accuracy (%) | 95 | | Unverified
2 | 9 RDLs | Accuracy (%) | 94 | | Unverified
3 | 3 RMDL | Accuracy (%) | 93 | | Unverified
4 | CNN | Accuracy (%) | 73 | | Unverified
5 | RMDL | Accuracy (%) | 0.1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | | Unverified
2 | | Sensitivity | 89.1 | | Unverified
3 | RMDL (3 RDLs) | Sensitivity | 0.87 | | Unverified