SOTAVerified

Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
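
The recipe has two stages: first train on a pretext objective that manufactures supervision from the unlabeled data itself, then reuse the learned encoder for the downstream task. Below is a minimal PyTorch sketch of that pattern, assuming a denoising (masked-reconstruction) pretext task on synthetic stand-in data; the architecture, corruption rate, and hyperparameters are illustrative assumptions, not taken from any paper listed here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Encoder whose weights we want to transfer to the downstream task.
encoder = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 64),
)
# Reconstruction head used only during pre-training, discarded afterwards.
decoder = nn.Linear(64, 128)

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

# Stand-in for a large unlabeled corpus (raw signals, patch features, ...).
unlabeled = torch.randn(1024, 128)

# Stage 1: self-supervised pre-training. The auxiliary task is denoising:
# zero out ~30% of the input features and ask the model to reconstruct the
# original, so the training targets come from the data itself.
for step in range(100):
    x = unlabeled[torch.randint(0, len(unlabeled), (256,))]
    corrupted = x * (torch.rand_like(x) > 0.3)   # random feature masking
    loss = F.mse_loss(decoder(encoder(corrupted)), x)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: transfer. Attach a task head to the pre-trained encoder and
# fine-tune (or linear-probe) on the small labeled set.
classifier = nn.Sequential(encoder, nn.Linear(64, 10))
```

The papers below differ mainly in the choice of pretext task (contrastive, masked prediction, bag-of-words, and so on) and in modality, but share this pre-train-then-transfer structure.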

Papers

Showing 1–50 of 265 papers

| Title | Status | Hype |
|---|---|---|
| Dynamic data sampler for cross-language transfer learning in large language models | Code | 7 |
| Qwen3 Embedding: Advancing Text Embedding and Reranking Through Foundation Models | Code | 5 |
| DepthSplat: Connecting Gaussian Splatting and Depth | Code | 5 |
| Large Brain Model for Learning Generic Representations with Tremendous EEG Data in BCI | Code | 4 |
| A Survey on Data Selection for Language Models | Code | 3 |
| FSFM: A Generalizable Face Security Foundation Model via Self-Supervised Facial Representation Learning | Code | 2 |
| SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery | Code | 2 |
| Large-Scale Pre-training for Person Re-identification with Noisy Labels | Code | 2 |
| CrystalFormer-RL: Reinforcement Fine-Tuning for Materials Design | Code | 2 |
| Foundation Policies with Hilbert Representations | Code | 2 |
| Pre-training strategies and datasets for facial representation learning | Code | 1 |
| PEAC: Unsupervised Pre-training for Cross-Embodiment Reinforcement Learning | Code | 1 |
| PersonViT: Large-scale Self-supervised Vision Transformer for Person Re-Identification | Code | 1 |
| Pre-training Contextualized World Models with In-the-wild Videos for Reinforcement Learning | Code | 1 |
| OBoW: Online Bag-of-Visual-Words Generation for Self-Supervised Learning | Code | 1 |
| PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training | Code | 1 |
| ProposalContrast: Unsupervised Pre-training for LiDAR-based 3D Object Detection | Code | 1 |
| METRA: Scalable Unsupervised RL with Metric-Aware Abstraction | Code | 1 |
| ELECTRIcity: An Efficient Transformer for Non-Intrusive Load Monitoring | Code | 1 |
| Drop your Decoder: Pre-training with Bag-of-Word Prediction for Dense Passage Retrieval | Code | 1 |
| MAGNeto: An Efficient Deep Learning Method for the Extractive Tags Summarization Problem | Code | 1 |
| Exact solutions to the nonlinear dynamics of learning in deep linear neural networks | Code | 1 |
| ConStyle v2: A Strong Prompter for All-in-One Image Restoration | Code | 1 |
| FreePoint: Unsupervised Point Cloud Instance Segmentation | Code | 1 |
| Patient Contrastive Learning: a Performant, Expressive, and Practical Approach to ECG Modeling | Code | 1 |
| Bag of Tricks and A Strong baseline for Image Copy Detection | Code | 1 |
| Behavior From the Void: Unsupervised Active Pre-Training | Code | 1 |
| Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition | Code | 1 |
| HIQL: Offline Goal-Conditioned RL with Latent States as Actions | Code | 1 |
| PointContrast: Unsupervised Pre-training for 3D Point Cloud Understanding | Code | 1 |
| Leveraging Pre-trained Checkpoints for Sequence Generation Tasks | Code | 1 |
| Multilingual Constituency Parsing with Self-Attention and Pre-Training | Code | 1 |
| CARLANE: A Lane Detection Benchmark for Unsupervised Domain Adaptation from Simulation to multiple Real-World Domains | Code | 1 |
| Initialization and Regularization of Factorized Neural Layers | Code | 1 |
| Korean-Specific Dataset for Table Question Answering | Code | 1 |
| D^2LV: A Data-Driven and Local-Verification Approach for Image Copy Detection | Code | 1 |
| FedAUX: Leveraging Unlabeled Auxiliary Data in Federated Learning | Code | 1 |
| CATE: Computation-aware Neural Architecture Encoding with Transformers | Code | 1 |
| DocILE Benchmark for Document Information Localization and Extraction | Code | 1 |
| Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner | Code | 1 |
| A Transformer-based Framework for Multivariate Time Series Representation Learning | Code | 1 |
| Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases | Code | 1 |
| A Further Study of Unsupervised Pre-training for Transformer Based Speech Recognition | Code | 1 |
| Exploring the Limits of Out-of-Distribution Detection | Code | 1 |
| End-to-End Training of Neural Retrievers for Open-Domain Question Answering | Code | 1 |
| ES-Net: An Efficient Stereo Matching Network | Code | 1 |
| BMRetriever: Tuning Large Language Models as Better Biomedical Text Retrievers | Code | 1 |
| AI-Bind: Improving Binding Predictions for Novel Protein Targets and Ligands | Code | 1 |
| DOBF: A Deobfuscation Pre-Training Objective for Programming Languages | Code | 1 |
| Improving Transformer-based Speech Recognition Using Unsupervised Pre-training | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | 15 RDLs | Accuracy (%) | 95 | | Unverified |
| 2 | 9 RDLs | Accuracy (%) | 94 | | Unverified |
| 3 | 3 RMDL | Accuracy (%) | 93 | | Unverified |
| 4 | CNN | Accuracy (%) | 73 | | Unverified |
| 5 | RMDL | Accuracy (%) | 0.1 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | | Unverified |
| 2 | | Sensitivity | 89.1 | | Unverified |
| 3 | RMDL 3 RDLs | Sensitivity | 0.87 | | Unverified |