SOTAVerified

Transfer Learning

Transfer Learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new, related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original one that the pre-trained model can be adapted with only minor modifications.
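
The freeze-the-backbone, train-a-new-head pattern described above can be sketched in a few lines. The following is a minimal illustration, not any specific paper's method: a fixed random projection stands in for a pre-trained feature extractor (in practice this would be a network trained on a large source task), and only a small logistic-regression head is trained on the limited target-task data. All names (`W_frozen`, `extract_features`) and the synthetic data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pretrained" backbone: a fixed (frozen) random feature map.
# In a real workflow this would be a network trained on a large source task.
W_frozen = rng.normal(size=(64, 16))

def extract_features(x):
    """Frozen feature extractor: its weights are never updated."""
    return np.tanh(x @ W_frozen)

# Small labelled dataset for the new target task (limited data regime).
X = rng.normal(size=(200, 64))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Only the new task-specific head (w, b) is trained.
w = np.zeros(16)
b = 0.0
lr = 0.5
feats = extract_features(X)  # computed once; the backbone is frozen

def bce(p, t):
    eps = 1e-9  # numerical guard for log
    return -np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))

p = 1 / (1 + np.exp(-(feats @ w + b)))
loss_before = bce(p, y)
for _ in range(500):
    p = 1 / (1 + np.exp(-(feats @ w + b)))  # sigmoid head
    grad = p - y                             # dBCE/dlogit
    w -= lr * feats.T @ grad / len(y)
    b -= lr * grad.mean()
loss_after = bce(1 / (1 + np.exp(-(feats @ w + b))), y)
```

Because gradients only ever touch `w` and `b`, the (expensive) backbone stays intact and the head can be fit quickly on a small dataset — the core economy that transfer learning exploits.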

(Image credit: Subodh Malgonde)

Papers

Showing 776–800 of 10307 papers

| Title | Status | Hype |
|-------|--------|------|
| The Master Key Filters Hypothesis: Deep Filters Are General | – | 0 |
| Monkey Transfer Learning Can Improve Human Pose Estimation | – | 0 |
| Enhancing Generalized Few-Shot Semantic Segmentation via Effective Knowledge Transfer | Code | 0 |
| Multi-Pair Temporal Sentence Grounding via Multi-Thread Knowledge Transfer Network | – | 0 |
| SeagrassFinder: Deep Learning for Eelgrass Detection and Coverage Estimation in the Wild | – | 0 |
| The First Multilingual Model For The Detection of Suicide Texts | – | 0 |
| Self-Evolution Knowledge Distillation for LLM-based Machine Translation | – | 0 |
| A Multi-Fidelity Graph U-Net Model for Accelerated Physics Simulations | – | 0 |
| Color Enhancement for V-PCC Compressed Point Cloud via 2D Attribute Map Optimization | – | 0 |
| RefHCM: A Unified Model for Referring Perceptions in Human-Centric Scenarios | Code | 0 |
| Knowledge Distillation in RNN-Attention Models for Early Prediction of Student Performance | Code | 0 |
| SCKD: Semi-Supervised Cross-Modality Knowledge Distillation for 4D Radar Object Detection | Code | 0 |
| Enhancing Knowledge Distillation for LLMs with Response-Priming Prompting | Code | 0 |
| Bridging the User-side Knowledge Gap in Knowledge-aware Recommendations with Large Language Models | Code | 1 |
| Language verY Rare for All | – | 0 |
| Trustworthy Transfer Learning: A Survey | – | 0 |
| FlexPose: Pose Distribution Adaptation with Limited Guidance | – | 0 |
| On Explaining Knowledge Distillation: Measuring and Visualising the Knowledge Transfer Process | – | 0 |
| Understanding and Analyzing Model Robustness and Knowledge-Transfer in Multilingual Neural Machine Translation using TX-Ray | – | 0 |
| In-Context Learning Distillation for Efficient Few-Shot Fine-Tuning | – | 0 |
| Extending LLMs to New Languages: A Case Study of Llama and Persian Adaptation | Code | 0 |
| Deep Speech Synthesis from Multimodal Articulatory Representations | – | 0 |
| Multi-Task Reinforcement Learning for Quadrotors | – | 0 |
| A3E: Aligned and Augmented Adversarial Ensemble for Accurate, Robust and Privacy-Preserving EEG Decoding | – | 0 |
| CiTrus: Squeezing Extra Performance out of Low-data Bio-signal Transfer Learning | – | 0 |
Page 32 of 413

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | APCLIP | Accuracy | 84.2 | – | Unverified |
| 2 | DFA-ENT | Accuracy | 69.2 | – | Unverified |
| 3 | DFA-SAFN | Accuracy | 69.1 | – | Unverified |
| 4 | EasyTL | Accuracy | 63.3 | – | Unverified |
| 5 | MEDA | Accuracy | 60.3 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CNN | 10-20% Mask PSNR | 3.23 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Co-Tuning | Accuracy | 85.65 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Physical Access | EER | 5.74 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | riadd.aucmedi | AUROC | 0.95 | – | Unverified |