
Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
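For concreteness, here is a minimal sketch of the idea, assuming PyTorch and synthetic stand-in data; the denoising-autoencoder objective, layer sizes, and hyperparameters are illustrative only and are not taken from any paper listed below.

```python
# Minimal sketch: unsupervised pre-training via a denoising autoencoder,
# followed by attaching a supervised head to the pre-trained encoder.
# All shapes, data, and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

# Encoder whose weights we want to pre-train without labels.
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784))

unlabeled = torch.rand(1024, 784)  # stand-in for a pool of unlabeled data
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

# Auxiliary task: reconstruct the clean input from a corrupted copy.
# This is one common self-supervised objective; masking, contrastive
# learning, or next-token prediction would fill the same role.
for epoch in range(5):
    for i in range(0, len(unlabeled), 128):
        x = unlabeled[i:i + 128]
        noisy = x + 0.3 * torch.randn_like(x)  # corrupt the input
        loss = nn.functional.mse_loss(decoder(encoder(noisy)), x)
        opt.zero_grad()
        loss.backward()
        opt.step()

# The decoder is discarded; the pre-trained encoder is kept and
# fine-tuned on the (typically smaller) labeled downstream task.
classifier = nn.Sequential(encoder, nn.Linear(64, 10))
```

The point of the two-stage setup is that the encoder's representation is learned from plentiful unlabeled data, so the labeled data only has to fit the small task head.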

Papers

Showing 81–90 of 265 papers

Title | Status | Hype
The Efficiency of Pre-training with Objective Masking in Pseudo Labeling for Semi-Supervised Text Classification | | 0
Latte: Transfering LLMs' Latent-level Knowledge for Few-shot Tabular Learning | | 0
Risk Assessment Framework for Code LLMs via Leveraging Internal States | | 0
ZS-VCOS: Zero-Shot Outperforms Supervised Video Camouflaged Object Segmentation | Code | 0
ZS-VCOS: Zero-Shot Outperforms Supervised Video Camouflaged Object Segmentation with Zero-Shot Method | Code | 0
How much do LLMs learn from negative examples? | Code | 0
Pretraining Generative Flow Networks with Inexpensive Rewards for Molecular Graph Generation | | 0
Provable Benefits of Unsupervised Pre-training and Transfer Learning via Single-Index Models | | 0
Targeted Forgetting of Image Subgroups in CLIP Models | | 0
Incomplete Multi-View Multi-label Learning via Disentangled Representation and Label Semantic Embedding | | 0
Page 9 of 27

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | 15 RDLs | Accuracy (%) | 95 | | Unverified
2 | 9 RDLs | Accuracy (%) | 94 | | Unverified
3 | 3 RMDL | Accuracy (%) | 93 | | Unverified
4 | CNN | Accuracy (%) | 73 | | Unverified
5 | RMDL | Accuracy (%) | 0.1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | | Unverified
2 | | Sensitivity | 89.1 | | Unverified
3 | RMDL 3 RDLs | Sensitivity | 0.87 | | Unverified