SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model rather than training a new model from scratch. This is especially useful when labeled data for the new task is limited, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
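The workflow described above can be sketched in a few lines of NumPy: a frozen "pre-trained" feature extractor feeds a new task-specific head that is trained from scratch on a small dataset. The extractor weights here are a fixed random projection and the labels are synthetic, both assumptions purely for illustration; in practice the frozen weights would come from a model trained on a large source task.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor: in practice these weights come from a
# model trained on a large source task; a fixed random projection stands in
# here (an assumption for illustration only).
W_pretrained = rng.normal(size=(4, 8))

def extract_features(x):
    # Frozen layers: forward pass only, no gradient updates.
    return np.tanh(x @ W_pretrained)

# Small target-task dataset -- limited data is the motivating scenario.
X = rng.normal(size=(32, 4))
y = (X[:, 0] > 0).astype(float)  # toy labels: sign of the first input feature

# New task-specific head, trained from scratch with logistic regression.
w = np.zeros(8)
b = 0.0
feats = extract_features(X)  # computed once, since the extractor is frozen
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid predictions
    w -= 0.5 * feats.T @ (p - y) / len(y)       # gradient step on the head only
    b -= 0.5 * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
accuracy = float(np.mean((p > 0.5) == y))
```

Because only the small head is updated, the optimization has far fewer parameters than full training, which is what makes adaptation feasible with little data.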

Papers

Showing 9251–9275 of 10307 papers

Title | Status | Hype
Cross-Lingual Argumentative Relation Identification: from English to Portuguese | Code | 0
DERE: A Task and Domain-Independent Slot Filling Framework for Declarative Relation Extraction | - | 0
Learning Unsupervised Word Mapping by Maximizing Mean Discrepancy | - | 0
Low-resource named entity recognition via multi-source projection: Not quite there yet? | - | 0
Language Modeling Teaches You More than Translation Does: Lessons Learned Through Auxiliary Syntactic Task Analysis | - | 0
Don't forget, there is more than forgetting: new metrics for Continual Learning | - | 0
Cross-Lingual Transfer Learning for Multilingual Task Oriented Dialog | - | 0
Gated Transfer Network for Transfer Learning | - | 0
Adaptive Transfer Learning in Deep Neural Networks: Wind Power Prediction using Knowledge Transfer from Region to Region and Between Different Task Domains | - | 0
Semi-unsupervised Learning of Human Activity using Deep Generative Models | Code | 0
Speaking style adaptation in Text-To-Speech synthesis using Sequence-to-sequence models with attention | - | 0
Cross-Modal Distillation for RGB-Depth Person Re-Identification | - | 0
Accumulating Knowledge for Lifelong Online Learning | - | 0
Finding Answers from the Word of God: Domain Adaptation for Neural Networks in Biblical Question Answering | - | 0
Transfer of Deep Reactive Policies for MDP Planning | Code | 0
K for the Price of 1: Parameter-efficient Multi-task and Transfer Learning | - | 0
Improving Document Binarization via Adversarial Noise-Texture Augmentation | Code | 0
Universal Language Model Fine-Tuning with Subword Tokenization for Polish | Code | 0
Training neural audio classifiers with few data | Code | 0
Testing the Generalization Power of Neural Network Models Across NLI Benchmarks | - | 0
Bayesian multi-domain learning for cancer subtype discovery from next-generation sequencing count data | - | 0
Boosting pathology detection in infants by deep transfer learning from adult speech | - | 0
How transferable are features in convolutional neural network acoustic models across languages? | - | 0
Mechanisms for Integrated Feature Normalization and Remaining Useful Life Estimation Using LSTMs Applied to Hard-Disks | - | 0
Machine Learning Methods for Track Classification in the AT-TPC | Code | 0
Page 371 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | - | Unverified
2 | DFA-ENT | Accuracy | 69.2 | - | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | - | Unverified
4 | EasyTL | Accuracy | 63.3 | - | Unverified
5 | MEDA | Accuracy | 60.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al.[1] | Accuracy | 96.12 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | - | Unverified