SOTAVerified

Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
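The idea above can be sketched with a toy masked-reconstruction pretext task: corrupt unlabeled inputs, train an encoder/decoder to recover the originals, then reuse the encoder weights downstream. This is a minimal NumPy sketch for illustration only; the model, masking rate, and learning rate are illustrative assumptions, not taken from any paper listed here.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 16))           # unlabeled data (no labels used anywhere)

d, h = X.shape[1], 8
W_enc = rng.normal(scale=0.1, size=(d, h))  # encoder weights (reused downstream)
W_dec = rng.normal(scale=0.1, size=(h, d))  # decoder weights (discarded after pre-training)
lr = 0.01

def step(X):
    """One gradient step on the masked-reconstruction auxiliary objective."""
    global W_enc, W_dec
    mask = rng.random(X.shape) > 0.3     # randomly drop ~30% of input features
    X_in = X * mask                      # corrupted input
    Z = X_in @ W_enc                     # encode
    X_hat = Z @ W_dec                    # reconstruct
    err = X_hat - X                      # target is the uncorrupted input
    loss = np.mean(err ** 2)
    # gradients of the MSE w.r.t. both weight matrices
    gW_dec = Z.T @ err * (2 / X.size)
    gW_enc = X_in.T @ (err @ W_dec.T) * (2 / X.size)
    W_dec -= lr * gW_dec
    W_enc -= lr * gW_enc
    return loss

losses = [step(X) for _ in range(200)]
# After pre-training, W_enc would initialize the encoder of a supervised model.
```

The pretext task supplies its own targets (the uncorrupted input), which is what makes the procedure self-supervised: no human labels are consumed, yet the encoder learns structure that transfers to labeled downstream tasks.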

Papers

Showing 1–50 of 265 papers

Title | Status | Hype
Dynamic data sampler for cross-language transfer learning in large language models | Code | 7
Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models | Code | 5
DepthSplat: Connecting Gaussian Splatting and Depth | Code | 5
Large Brain Model for Learning Generic Representations with Tremendous EEG Data in BCI | Code | 4
A Survey on Data Selection for Language Models | Code | 3
CrystalFormer-RL: Reinforcement Fine-Tuning for Materials Design | Code | 2
FSFM: A Generalizable Face Security Foundation Model via Self-Supervised Facial Representation Learning | Code | 2
Foundation Policies with Hilbert Representations | Code | 2
SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery | Code | 2
Large-Scale Pre-training for Person Re-identification with Noisy Labels | Code | 2
SPACE: Your Genomic Profile Predictor is a Powerful DNA Foundation Model | Code | 1
PersonViT: Large-scale Self-supervised Vision Transformer for Person Re-Identification | Code | 1
ConStyle v2: A Strong Prompter for All-in-One Image Restoration | Code | 1
PEAC: Unsupervised Pre-training for Cross-Embodiment Reinforcement Learning | Code | 1
BMRetriever: Tuning Large Language Models as Better Biomedical Text Retrievers | Code | 1
Drop your Decoder: Pre-training with Bag-of-Word Prediction for Dense Passage Retrieval | Code | 1
Unified Multi-modal Unsupervised Representation Learning for Skeleton-based Action Understanding | Code | 1
METRA: Scalable Unsupervised RL with Metric-Aware Abstraction | Code | 1
HIQL: Offline Goal-Conditioned RL with Latent States as Actions | Code | 1
Pre-training Contextualized World Models with In-the-wild Videos for Reinforcement Learning | Code | 1
Rethinking Semi-supervised Learning with Language Models | Code | 1
PTGB: Pre-Train Graph Neural Networks for Brain Network Analysis | Code | 1
FreePoint: Unsupervised Point Cloud Instance Segmentation | Code | 1
Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner | Code | 1
Unsupervised Pre-Training For Data-Efficient Text-to-Speech On Low Resource Languages | Code | 1
MultiTalent: A Multi-Dataset Approach to Medical Image Segmentation | Code | 1
DocILE Benchmark for Document Information Localization and Extraction | Code | 1
ProposalContrast: Unsupervised Pre-training for LiDAR-based 3D Object Detection | Code | 1
Unsupervised pre-training of graph transformers on patient population graphs | Code | 1
CARLANE: A Lane Detection Benchmark for Unsupervised Domain Adaptation from Simulation to multiple Real-World Domains | Code | 1
Self-Supervised Visual Representation Learning with Semantic Grouping | Code | 1
Semi-supervised 3D shape segmentation with multilevel consistency and part substitution | Code | 1
ELECTRIcity: An Efficient Transformer for Non-Intrusive Load Monitoring | Code | 1
Unsupervised Pre-training for Temporal Action Localization Tasks | Code | 1
Reinforcement Learning with Action-Free Pre-Training from Videos | Code | 1
Unsupervised Pre-Training on Patient Population Graphs for Patient-Level Predictions | Code | 1
Korean-Specific Dataset for Table Question Answering | Code | 1
The CLEAR Benchmark: Continual LEArning on Real-World Imagery | Code | 1
AI-Bind: Improving Binding Predictions for Novel Protein Targets and Ligands | Code | 1
SimIPU: Simple 2D Image and 3D Point Cloud Unsupervised Pre-Training for Spatial-Aware Visual Representations | Code | 1
Bag of Tricks and A Strong baseline for Image Copy Detection | Code | 1
D^2LV: A Data-Driven and Local-Verification Approach for Image Copy Detection | Code | 1
Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition | Code | 1
RandomRooms: Unsupervised Pre-training from Synthetic Shapes and Randomized Layouts for 3D Object Detection | Code | 1
PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training | Code | 1
Exploring the Limits of Out-of-Distribution Detection | Code | 1
Initialization and Regularization of Factorized Neural Layers | Code | 1
Patient Contrastive Learning: a Performant, Expressive, and Practical Approach to ECG Modeling | Code | 1
Pre-training strategies and datasets for facial representation learning | Code | 1
Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote Sensing Data | Code | 1
Page 1 of 6

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | 15 RDLs | Accuracy (%) | 95 | — | Unverified
2 | 9 RDLs | Accuracy (%) | 94 | — | Unverified
3 | 3 RMDL | Accuracy (%) | 93 | — | Unverified
4 | CNN | Accuracy (%) | 73 | — | Unverified
5 | RMDL | Accuracy (%) | 0.1 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | — | Unverified
2 | — | Sensitivity | 89.1 | — | Unverified
3 | RMDL (3 RDLs) | Sensitivity | 0.87 | — | Unverified