SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new but related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.
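A minimal sketch of the idea, using only numpy: a fixed random projection stands in for a pre-trained feature extractor (a hypothetical stand-in, not a real pre-trained model), its weights are frozen, and only a new task-specific head is trained on the small target dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor: a frozen random projection standing in
# for layers learned on a large source task (hypothetical stand-in).
W_frozen = rng.normal(size=(4, 8))

def extract_features(x):
    # Frozen layer: W_frozen is never updated during fine-tuning.
    return np.maximum(x @ W_frozen, 0.0)

# Small target-task dataset: label depends only on the first input feature.
X = rng.normal(size=(64, 4))
y = (X[:, 0] > 0).astype(float)

# New head for the target task: the only weights we train.
w_head = np.zeros(8)

for _ in range(200):
    f = extract_features(X)
    p = 1.0 / (1.0 + np.exp(-(f @ w_head)))   # logistic head
    w_head -= 0.1 * f.T @ (p - y) / len(y)    # gradient step on head only

preds = 1.0 / (1.0 + np.exp(-(extract_features(X) @ w_head))) > 0.5
acc = float(np.mean(preds == y))
```

In practice the frozen extractor would be a real pre-trained network (and one might later unfreeze some layers for full fine-tuning), but the structure is the same: reuse the learned representation, train only the small new part.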

( Image credit: Subodh Malgonde )

Papers

Showing 8051-8075 of 10307 papers

Title | Status | Hype
What Matters for Neural Cross-Lingual Named Entity Recognition: An Empirical Analysis | | 0
What matters in a transferable neural network model for relation classification in the biomedical domain? | | 0
What's Mine is Yours: Pretrained CNNs for Limited Training Sonar ATR | | 0
What Synthesis is Missing: Depth Adaptation Integrated with Weak Supervision for Indoor Scene Parsing | | 0
What we really want to find by Sentiment Analysis: The Relationship between Computational Models and Psychological State | | 0
A Two-Stage Federated Transfer Learning Framework in Medical Images Classification on Limited Data: A COVID-19 Case Study | | 0
When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey | | 0
When does a bridge become an aeroplane? | | 0
When does CLIP generalize better than unimodal models? When judging human-centric concepts | | 0
When does Parameter-Efficient Transfer Learning Work for Machine Translation? | | 0
When Invariant Representation Learning Meets Label Shift: Insufficiency and Theoretical Insights | | 0
When Models Know More Than They Can Explain: Quantifying Knowledge Transfer in Human-AI Collaboration | | 0
When More is not Necessary Better: Multilingual Auxiliary Tasks for Zero-Shot Cross-Lingual Transfer of Hate Speech Detection Models | | 0
When Semi-Supervised Learning Meets Transfer Learning: Training Strategies, Models and Datasets | | 0
When Video Classification Meets Incremental Classes | | 0
Where Should I Spend My FLOPS? Efficiency Evaluations of Visual Pre-training Methods | | 0
Which Model to Transfer? A Survey on Transferability Estimation | | 0
Which Model to Transfer? Finding the Needle in the Growing Haystack | | 0
Multimodal Magic Elevating Depression Detection with a Fusion of Text and Audio Intelligence | | 0
Whit's the Richt Pairt o Speech: PoS tagging for Scots | | 0
WHO-Hand Hygiene Gesture Classification System | | 0
Who Writes the Review, Human or AI? | | 0
Why Adversarial Reprogramming Works, When It Fails, and How to Tell the Difference | | 0
Why Can You Lay Off Heads? Investigating How BERT Heads Transfer | | 0
Why Is Public Pretraining Necessary for Private Model Training? | | 0
Page 323 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | | Unverified
2 | DFA-ENT | Accuracy | 69.2 | | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | | Unverified
4 | EasyTL | Accuracy | 63.3 | | Unverified
5 | MEDA | Accuracy | 60.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | | Unverified