SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is re-purposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new, related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original one that the pre-trained model can be adapted with only minor modifications (for example, by replacing and retraining just the final classification layer).

( Image credit: Subodh Malgonde )
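The idea above can be sketched in a few lines of NumPy. This is a toy illustration, not a real workflow: the "pre-trained" backbone weights below are random stand-ins (in practice they would come from a model trained on a large source task), and only the new classification head is trained on the small target dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor. These weights are frozen during
# fine-tuning. Here they are random stand-ins; a real application would
# load weights learned on a large source task.
W_backbone = rng.normal(size=(8, 16))

def features(x):
    # Frozen backbone: ReLU features, never updated.
    return np.maximum(0.0, x @ W_backbone)

# New target task: tiny synthetic binary classification (limited data).
X = rng.normal(size=(32, 8))
y = (X[:, 0] > 0).astype(float)

# Only the new head is trained ("fine-tuning" just the classifier).
w_head = np.zeros(16)
b_head = 0.0

def loss_and_grads(X, y):
    h = features(X)
    logits = h @ w_head + b_head
    p = 1.0 / (1.0 + np.exp(-logits))  # sigmoid
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    err = (p - y) / len(y)             # d(loss)/d(logits), averaged
    return loss, h.T @ err, err.sum()

loss_before, _, _ = loss_and_grads(X, y)
for _ in range(800):                   # gradient descent on the head only
    _, grad_w, grad_b = loss_and_grads(X, y)
    w_head -= 0.05 * grad_w
    b_head -= 0.05 * grad_b
loss_after, _, _ = loss_and_grads(X, y)
print(loss_before, loss_after)
```

The same pattern appears in deep learning frameworks as "freeze the backbone, replace the head": e.g. setting `requires_grad = False` on pre-trained layers in PyTorch and training only a newly initialized output layer.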

Papers

Showing 601–625 of 10,307 papers

Title | Status | Hype
Transfer Learning of Surrogate Models: Integrating Domain Warping and Affine Transformations | — | 0
Distilling Knowledge for Designing Computational Imaging Systems | Code | 0
Digital Twin Synchronization: Bridging the Sim-RL Agent to a Real-Time Robotic Additive Manufacturing Control | — | 0
Action Recognition Using Temporal Shift Module and Ensemble Learning | Code | 0
LEKA: LLM-Enhanced Knowledge Augmentation | — | 0
Fundamental Computational Limits in Pursuing Invariant Causal Prediction and Invariance-Guided Regularization | — | 0
Stiff Transfer Learning for Physics-Informed Neural Networks | — | 0
Multimodal Magic: Elevating Depression Detection with a Fusion of Text and Audio Intelligence | — | 0
Molecular-driven Foundation Model for Oncologic Pathology | Code | 4
TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models | — | 0
CoRe-Net: Co-Operational Regressor Network with Progressive Transfer Learning for Blind Radar Signal Restoration | Code | 0
Automatic Machine Learning Framework to Study Morphological Parameters of AGN Host Galaxies within z < 1.4 in the Hyper Suprime-Cam Wide Survey | Code | 0
Transfer of Knowledge through Reverse Annealing: A Preliminary Analysis of the Benefits and What to Share | — | 0
MM-Retinal V2: Transfer an Elite Knowledge Spark into Fundus Vision-Language Pretraining | Code | 2
Variational Bayesian Adaptive Learning of Deep Latent Variables for Acoustic Knowledge Transfer | — | 0
ARWKV: Pretrain is not what we need, an RNN-Attention-Based Language Model Born from Transformer | Code | 1
Expert-Free Online Transfer Learning in Multi-Agent Reinforcement Learning | Code | 0
Universal Image Restoration Pre-training via Degradation Classification | Code | 2
A Transfer Learning Framework for Anomaly Detection in Multivariate IoT Traffic Data | — | 0
Building Efficient Lightweight CNN Models | — | 0
Cross-Modal Transfer from Memes to Videos: Addressing Data Scarcity in Hateful Video Detection | Code | 0
Uni-Sign: Toward Unified Sign Language Understanding at Scale | Code | 2
In-Context Operator Learning for Linear Propagator Models | — | 0
Explainable YOLO-Based Dyslexia Detection in Synthetic Handwriting Data | — | 0
Neuronal and structural differentiation in the emergence of abstract rules in hierarchically modulated spiking neural networks | — | 0
Page 25 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | — | Unverified
2 | DFA-ENT | Accuracy | 69.2 | — | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | — | Unverified
4 | EasyTL | Accuracy | 63.3 | — | Unverified
5 | MEDA | Accuracy | 60.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | — | Unverified