SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a different but related task. The idea is to leverage the knowledge captured by a pre-trained model when solving a new problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
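The core pattern described above (freeze the pre-trained part, train only a small new head on the target task) can be sketched in plain Python. This is a toy illustration, not a real pre-trained network: the "pretrained" weights, the feature extractor, and the target-task data below are all invented for the example.

```python
# Toy sketch of the transfer-learning pattern:
# a "pretrained" feature extractor is FROZEN, and only a small
# new head (scale a, bias b) is trained on the target task.

# Pretend these weights came from pre-training on a large source task.
PRETRAINED_W = [0.9, -0.4, 0.3]  # frozen; never updated below

def features(x):
    """Frozen feature extractor: weighted sum of the inputs."""
    return sum(w * xi for w, xi in zip(PRETRAINED_W, x))

def train_head(data, epochs=200, lr=0.05):
    """Fit ONLY the new head parameters (a, b) with plain SGD."""
    a, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            f = features(x)
            err = (a * f + b) - y
            # Gradient steps touch only the head, not PRETRAINED_W.
            a -= lr * err * f
            b -= lr * err
    return a, b

# Tiny (invented) target-task dataset.
target_data = [([1, 2, 3], 1.5), ([2, 0, 1], 2.05), ([0, 1, 1], 0.95)]
a, b = train_head(target_data)
print([round(a * features(x) + b, 2) for x, _ in target_data])
```

In a real setting the frozen part would be a deep network pre-trained on a large dataset, and "minor modifications" might also include unfreezing its last few layers with a small learning rate.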

Papers

Showing 751–775 of 10307 papers

| Title | Status | Hype |
| --- | --- | --- |
| ProQA: Structural Prompt-based Pre-training for Unified Question Answering | Code | 1 |
| On the Use of BERT for Automated Essay Scoring: Joint Learning of Multi-Scale Essay Representation | Code | 1 |
| Empowering parameter-efficient transfer learning by recognizing the kernel structure in self-attention | Code | 1 |
| MTTrans: Cross-Domain Object Detection with Mean-Teacher Transformer | Code | 1 |
| Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages | Code | 1 |
| Disentangled Knowledge Transfer for OOD Intent Discovery with Unified Contrastive Learning | Code | 1 |
| Fix the Noise: Disentangling Source Feature for Transfer Learning of StyleGAN | Code | 1 |
| Predicting Cellular Responses to Novel Drug Perturbations at a Single-Cell Resolution | Code | 1 |
| Hierarchical Bayesian Modelling for Knowledge Transfer Across Engineering Fleets via Multitask Learning | Code | 1 |
| Multi-Head Distillation for Continual Unsupervised Domain Adaptation in Semantic Segmentation | Code | 1 |
| Learning to Win Lottery Tickets in BERT Transfer via Task-agnostic Mask Training | Code | 1 |
| Smart App Attack: Hacking Deep Learning Models in Android Apps | Code | 1 |
| A benchmark dataset for deep learning-based airplane detection: HRPlanes | Code | 1 |
| Progressive Training of A Two-Stage Framework for Video Restoration | Code | 1 |
| Self-supervised Learning for Sonar Image Classification | Code | 1 |
| Deep transfer operator learning for partial differential equations under conditional shift | Code | 1 |
| Safe Self-Refinement for Transformer-based Domain Adaptation | Code | 1 |
| Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference | Code | 1 |
| DeiT III: Revenge of the ViT | Code | 1 |
| Accurate Clinical Toxicity Prediction using Multi-task Deep Neural Nets and Contrastive Molecular Explanations | Code | 1 |
| Adapting Pre-trained Language Models to African Languages via Multilingual Adaptive Fine-Tuning | Code | 1 |
| Commonality in Natural Images Rescues GANs: Pretraining GANs with Generic and Privacy-free Synthetic Data | Code | 1 |
| mulEEG: A Multi-View Representation Learning on EEG Signals | Code | 1 |
| L2G: A Simple Local-to-Global Knowledge Transfer Framework for Weakly Supervised Semantic Segmentation | Code | 1 |
| Unsupervised Prompt Learning for Vision-Language Models | Code | 1 |
Page 31 of 413

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | APCLIP | Accuracy | 84.2 | | Unverified |
| 2 | DFA-ENT | Accuracy | 69.2 | | Unverified |
| 3 | DFA-SAFN | Accuracy | 69.1 | | Unverified |
| 4 | EasyTL | Accuracy | 63.3 | | Unverified |
| 5 | MEDA | Accuracy | 60.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CNN | 10-20% Mask PSNR | 3.23 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Chatterjee, Dutta et al.[1] | Accuracy | 96.12 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Co-Tuning | Accuracy | 85.65 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Physical Access | EER | 5.74 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | riadd.aucmedi | AUROC | 0.95 | | Unverified |