SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge encoded in a pre-trained model when solving a new problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications, for example by freezing the pre-trained feature extractor and retraining only the final layer.
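The freeze-the-backbone recipe described above can be sketched in a few lines. This is a minimal, self-contained illustration with NumPy, not code from any of the listed papers: a stand-in "pre-trained" linear feature extractor is kept frozen, and only a new linear head is fit on the target task (ordinary least squares stands in for gradient-based fine-tuning). All names here (`W_pretrained`, `extract_features`, `head`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained backbone: a fixed linear map learned on a
# source task. In practice this would be, e.g., a CNN trained on ImageNet.
W_pretrained = rng.normal(size=(4, 8))  # 4-dim inputs -> 8-dim features

def extract_features(X):
    # Frozen backbone: reuse the source-task weights, no updates here.
    return np.maximum(X @ W_pretrained, 0.0)  # ReLU features

# Target task: a small labelled set, too small to train from scratch.
X_new = rng.normal(size=(20, 4))
y_new = (X_new[:, 0] > 0).astype(float)

# Train only a new linear "head" on top of the frozen features.
# Least squares stands in for fine-tuning the last layer.
F = extract_features(X_new)
head, *_ = np.linalg.lstsq(F, y_new, rcond=None)

preds = (extract_features(X_new) @ head > 0.5).astype(float)
accuracy = (preds == y_new).mean()
```

Only `head` (8 parameters) is learned on the target data; the backbone's weights are untouched, which is what makes the approach viable with limited data.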

(Image credit: Subodh Malgonde)

Papers

Showing 3426–3450 of 10307 papers

| Title | Status | Hype |
|---|---|---|
| Global Safe Sequential Learning via Efficient Knowledge Transfer | Code | 0 |
| Practical Insights into Knowledge Distillation for Pre-Trained Models | | 0 |
| ARL2: Aligning Retrievers for Black-box Large Language Models via Self-guided Adaptive Relevance Labeling | Code | 0 |
| Wisdom of Committee: Distilling from Foundation Model to Specialized Application Model | | 0 |
| Simple and Effective Transfer Learning for Neuro-Symbolic Integration | | 0 |
| Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors | | 0 |
| Scalable and reliable deep transfer learning for intelligent fault detection via multi-scale neural processes embedded with knowledge | | 0 |
| LinkSAGE: Optimizing Job Matching Using Graph Neural Networks | | 0 |
| Learning Causal Domain-Invariant Temporal Dynamics for Few-Shot Action Recognition | | 0 |
| CST: Calibration Side-Tuning for Parameter and Memory Efficient Transfer Learning | | 0 |
| Cross-Domain Transfer Learning with CoRTe: Consistent and Reliable Transfer from Black-Box to Lightweight Segmentation Model | | 0 |
| Molecule Generation and Optimization for Efficient Fragrance Creation | Code | 0 |
| Key ingredients for effective zero-shot cross-lingual knowledge transfer in generative tasks | | 0 |
| Predicting trucking accidents with truck drivers' safety climate perception across companies: A transfer learning approach | | 0 |
| Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages | | 0 |
| Induced Model Matching: How Restricted Models Can Help Larger Ones | Code | 0 |
| Stealing the Invisible: Unveiling Pre-Trained CNN Models through Adversarial Examples and Timing Side-Channels | | 0 |
| A synthetic data approach for domain generalization of NLI models | | 0 |
| Mitigating Catastrophic Forgetting in Multi-domain Chinese Spelling Correction by Multi-stage Knowledge Transfer Framework | | 0 |
| Autocorrect for Estonian texts: final report from project EKTB25 | | 0 |
| A Question Answering Based Pipeline for Comprehensive Chinese EHR Information Extraction | | 0 |
| Differential Private Federated Transfer Learning for Mental Health Monitoring in Everyday Settings: A Case Study on Stress Detection | | 0 |
| Personalised Drug Identifier for Cancer Treatment with Transformers using Auxiliary Information | Code | 0 |
| Robust agents learn causal world models | | 0 |
| Towards Precision Cardiovascular Analysis in Zebrafish: The ZACAF Paradigm | | 0 |
Page 138 of 413

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | APCLIP | Accuracy | 84.2 | | Unverified |
| 2 | DFA-ENT | Accuracy | 69.2 | | Unverified |
| 3 | DFA-SAFN | Accuracy | 69.1 | | Unverified |
| 4 | EasyTL | Accuracy | 63.3 | | Unverified |
| 5 | MEDA | Accuracy | 60.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CNN | 10-20% Mask PSNR | 3.23 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Co-Tuning | Accuracy | 85.65 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Physical Access | EER | 5.74 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | riadd.aucmedi | AUROC | 0.95 | | Unverified |