SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new but related problem. This is useful when there is limited data available to train a new model from scratch, or when the new task is similar enough to the original task that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
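The description above can be sketched in code. The toy example below (pure NumPy, purely illustrative; the "pretrained" weights, dataset, and closed-form head fit are all invented for the sketch) freezes a feature extractor that stands in for a model trained on a large source task, then fits only a small new head on limited target-task data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights were learned on a large source task; we freeze them
# and never update them during target-task training.
W_pretrained = rng.normal(size=(16, 8))  # maps 16-dim input -> 8-dim features

def extract_features(x):
    """Frozen pretrained backbone: fixed projection followed by ReLU."""
    return np.maximum(x @ W_pretrained, 0.0)

# Tiny target-task dataset -- far too small to train a full model from scratch.
X_small = rng.normal(size=(20, 16))
y_small = (X_small[:, 0] + X_small[:, 1] > 0).astype(float)

# "Fine-tune" only a new linear head on top of the frozen features
# (closed-form least squares here, standing in for gradient descent).
features = extract_features(X_small)
head, *_ = np.linalg.lstsq(features, y_small, rcond=None)

# Evaluate the adapted model on the target task.
preds = (extract_features(X_small) @ head > 0.5).astype(float)
accuracy = (preds == y_small).mean()
```

In a real setting the frozen backbone would be a network such as a pretrained ResNet or language model, and the head would be trained by gradient descent, often followed by unfreezing some backbone layers at a lower learning rate.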

Papers

Showing 2051–2075 of 10307 papers

| Title | Status | Hype |
|---|---|---|
| JointMotion: Joint Self-Supervision for Joint Motion Prediction | | 0 |
| Hybridized Convolutional Neural Networks and Long Short-Term Memory for Improved Alzheimer's Disease Diagnosis from MRI Scans | | 0 |
| Authorship Attribution in Bangla Literature (AABL) via Transfer Learning using ULMFiT | | 0 |
| Towards generalization of drug response prediction to single cells and patients utilizing importance-aware multi-source domain transfer learning | Code | 0 |
| HistGen: Histopathology Report Generation via Local-Global Feature Encoding and Cross-modal Context Interaction | Code | 2 |
| Synthetic data generation for system identification: leveraging knowledge transfer from similar systems | Code | 0 |
| Physics-informed and Unsupervised Riemannian Domain Adaptation for Machine Learning on Heterogeneous EEG Datasets | | 0 |
| Cell reprogramming design by transfer learning of functional transcriptional networks | Code | 0 |
| Source Matters: Source Dataset Impact on Model Robustness in Medical Imaging | Code | 0 |
| DA-Net: A Disentangled and Adaptive Network for Multi-Source Cross-Lingual Transfer Learning | | 0 |
| AUFormer: Vision Transformers are Parameter-Efficient Facial Action Unit Detectors | Code | 2 |
| Temporal Relations of Informative Frames in Action Recognition | Code | 0 |
| A Privacy-Preserving Framework with Multi-Modal Data for Cross-Domain Recommendation | | 0 |
| On Transfer in Classification: How Well do Subsets of Classes Generalize? | | 0 |
| Self and Mixed Supervision to Improve Training Labels for Multi-Class Medical Image Segmentation | | 0 |
| LEAD: Learning Decomposition for Source-free Universal Domain Adaptation | Code | 1 |
| Multi-modal Deep Learning | | 0 |
| Neural Architecture Search using Particle Swarm and Ant Colony Optimization | | 0 |
| Domain-Agnostic Mutual Prompting for Unsupervised Domain Adaptation | | 0 |
| Zero-Shot Cross-Lingual Document-Level Event Causality Identification with Heterogeneous Graph Contrastive Transfer Learning | | 0 |
| PalmProbNet: A Probabilistic Approach to Understanding Palm Distributions in Ecuadorian Tropical Forest via Transfer Learning | | 0 |
| A Unified Framework for Microscopy Defocus Deblur with Multi-Pyramid Transformer and Contrastive Learning | Code | 1 |
| TPLLM: A Traffic Prediction Framework Based on Pretrained Large Language Models | | 0 |
| How does Architecture Influence the Base Capabilities of Pre-trained Language Models? A Case Study Based on FFN-Wider and MoE Transformers | | 0 |
| Distilled ChatGPT Topic & Sentiment Modeling with Applications in Finance | | 0 |
Page 83 of 413

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | APCLIP | Accuracy | 84.2 | | Unverified |
| 2 | DFA-ENT | Accuracy | 69.2 | | Unverified |
| 3 | DFA-SAFN | Accuracy | 69.1 | | Unverified |
| 4 | EasyTL | Accuracy | 63.3 | | Unverified |
| 5 | MEDA | Accuracy | 60.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CNN | 10-20% Mask PSNR | 3.23 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Co-Tuning | Accuracy | 85.65 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Physical Access | EER | 5.74 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | riadd.aucmedi | AUROC | 0.95 | | Unverified |