SOTAVerified

Unsupervised Pre-training

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
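A minimal sketch of the idea (illustrative only, not drawn from any listed paper): pre-train a tiny linear autoencoder on unlabeled data by minimizing reconstruction error, then reuse the learned encoder weights to initialize a downstream supervised model. All dimensions, learning rate, and variable names here are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data: 200 samples of 8-dim features with hidden low-rank structure.
basis = rng.normal(size=(3, 8))
X = rng.normal(size=(200, 3)) @ basis

# Tiny linear autoencoder: encoder W (8 -> 3), decoder V (3 -> 8).
W = rng.normal(scale=0.1, size=(8, 3))
V = rng.normal(scale=0.1, size=(3, 8))

def recon_loss(X, W, V):
    """Mean squared reconstruction error of the autoencoder."""
    return float(np.mean((X @ W @ V - X) ** 2))

loss_before = recon_loss(X, W, V)
lr = 0.01
for _ in range(500):
    H = X @ W            # codes (self-supervised: targets are the inputs)
    R = H @ V - X        # reconstruction residual
    gV = H.T @ R / len(X)          # gradient w.r.t. decoder
    gW = X.T @ (R @ V.T) / len(X)  # gradient w.r.t. encoder
    V -= lr * gV
    W -= lr * gW
loss_after = recon_loss(X, W, V)

print(f"reconstruction MSE: {loss_before:.3f} -> {loss_after:.3f}")
# The pre-trained encoder W can now initialize a supervised model,
# which is the essence of unsupervised pre-training.
```

The same recipe scales up: contrastive, masked-prediction, or generative objectives replace the reconstruction loss, but the pattern (self-supervised objective on unlabeled data, then transfer of the learned representation) is unchanged.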

Papers

Showing 51–100 of 265 papers

Title | Status | Hype
Behavior From the Void: Unsupervised Active Pre-Training | Code | 1
ES-Net: An Efficient Stereo Matching Network | Code | 1
DOBF: A Deobfuscation Pre-Training Objective for Programming Languages | Code | 1
CATE: Computation-aware Neural Architecture Encoding with Transformers | Code | 1
Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals | Code | 1
FedAUX: Leveraging Unlabeled Auxiliary Data in Federated Learning | Code | 1
SEED: Self-supervised Distillation For Visual Representation | Code | 1
End-to-End Training of Neural Retrievers for Open-Domain Question Answering | Code | 1
OBoW: Online Bag-of-Visual-Words Generation for Self-Supervised Learning | Code | 1
TabTransformer: Tabular Data Modeling Using Contextual Embeddings | Code | 1
Unsupervised Pre-training for Person Re-identification | Code | 1
UP-DETR: Unsupervised Pre-training for Object Detection with Transformers | Code | 1
MAGNeto: An Efficient Deep Learning Method for the Extractive Tags Summarization Problem | Code | 1
Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases | Code | 1
Unsupervised Vision-and-Language Pre-training Without Parallel Images and Captions | Code | 1
A Transformer-based Framework for Multivariate Time Series Representation Learning | Code | 1
Self-training Improves Pre-training for Natural Language Understanding | Code | 1
Spatiotemporal Contrastive Video Representation Learning | Code | 1
SeCo: Exploring Sequence Supervision for Unsupervised Representation Learning | Code | 1
PointContrast: Unsupervised Pre-training for 3D Point Cloud Understanding | Code | 1
A Further Study of Unsupervised Pre-training for Transformer Based Speech Recognition | Code | 1
Rolling-Unrolling LSTMs for Action Anticipation from First-Person Video | Code | 1
TACRED Revisited: A Thorough Evaluation of the TACRED Relation Extraction Task | Code | 1
Improving Transformer-based Speech Recognition Using Unsupervised Pre-training | Code | 1
Leveraging Pre-trained Checkpoints for Sequence Generation Tasks | Code | 1
wav2vec: Unsupervised Pre-training for Speech Recognition | Code | 1
Multilingual Constituency Parsing with Self-Attention and Pre-Training | Code | 1
Exact solutions to the nonlinear dynamics of learning in deep linear neural networks | Code | 1
Foundation Model for Wireless Technology Recognition Using IQ Timeseries | | 0
Is MixIT Really Unsuitable for Correlated Sources? Exploring MixIT for Unsupervised Pre-training in Music Source Separation | | 0
The Efficiency of Pre-training with Objective Masking in Pseudo Labeling for Semi-Supervised Text Classification | | 0
Latte: Transfering LLMs' Latent-level Knowledge for Few-shot Tabular Learning | | 0
Risk Assessment Framework for Code LLMs via Leveraging Internal States | | 0
ZS-VCOS: Zero-Shot Outperforms Supervised Video Camouflaged Object Segmentation | Code | 0
ZS-VCOS: Zero-Shot Outperforms Supervised Video Camouflaged Object Segmentation with Zero-Shot Method | Code | 0
How much do LLMs learn from negative examples? | Code | 0
Pretraining Generative Flow Networks with Inexpensive Rewards for Molecular Graph Generation | | 0
Provable Benefits of Unsupervised Pre-training and Transfer Learning via Single-Index Models | | 0
Incomplete Multi-View Multi-label Learning via Disentangled Representation and Label Semantic Embedding | | 0
Targeted Forgetting of Image Subgroups in CLIP Models | | 0
HindiLLM: Large Language Model for Hindi | | 0
Transferring self-supervised pre-trained models for SHM data anomaly detection with scarce labeled data | | 0
CLAP: Unsupervised 3D Representation Learning for Fusion 3D Perception via Curvature Sampling and Prototype Learning | | 0
Point Cloud Unsupervised Pre-training via 3D Gaussian Splatting | | 0
Take Package as Language: Anomaly Detection Using Transformer | Code | 0
SynCo: Synthetic Hard Negatives in Contrastive Learning for Better Unsupervised Visual Representations | Code | 0
Calibrating Language Models with Adaptive Temperature Scaling | Code | 0
Range-aware Positional Encoding via High-order Pretraining: Theory and Practice | | 0
GFlowNet Pretraining with Inexpensive Rewards | | 0
TraIL-Det: Transformation-Invariant Local Feature Networks for 3D LiDAR Object Detection with Unsupervised Pre-Training | | 0
Page 2 of 6

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | 15 RDLs | Accuracy (%) | 95 | | Unverified
2 | 9 RDLs | Accuracy (%) | 94 | | Unverified
3 | 3 RMDL | Accuracy (%) | 93 | | Unverified
4 | CNN | Accuracy (%) | 73 | | Unverified
5 | RMDL | Accuracy (%) | 0.1 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | RMDL (30 RDLs) | Sensitivity (VEB) | 90.69 | | Unverified
2 | | Sensitivity | 89.1 | | Unverified
3 | RMDL 3 RDLs | Sensitivity | 0.87 | | Unverified