SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new, related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.
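The pattern described above can be sketched in a few lines of NumPy (this is a toy illustration, not tied to any paper on this page; the two tasks, network sizes, and hyperparameters are all made up): pre-train a small two-layer network on a data-rich source task, then freeze the hidden layer and train only a fresh output head on a small, related target task.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# --- Pre-training on a source task with plenty of data ---
# Toy source task: classify points by whether x0 + x1 > 0.
Xs = rng.normal(size=(500, 2))
ys = (Xs.sum(axis=1) > 0).astype(float)

W1 = rng.normal(scale=0.5, size=(2, 8))   # hidden layer (the "backbone")
w2 = rng.normal(scale=0.5, size=8)        # source-task head

for _ in range(300):                       # plain gradient descent on log loss
    h = relu(Xs @ W1)
    p = 1.0 / (1.0 + np.exp(-(h @ w2)))
    g = p - ys                             # dLoss/dlogit for logistic loss
    w2 -= 0.1 * h.T @ g / len(Xs)
    gh = np.outer(g, w2) * (h > 0)         # backprop through ReLU
    W1 -= 0.1 * Xs.T @ gh / len(Xs)

# --- Transfer: freeze W1, train only a new head on the target task ---
# Related target task with very little data: classify by x0 - x1 > 0.
Xt = rng.normal(size=(30, 2))
yt = (Xt[:, 0] - Xt[:, 1] > 0).astype(float)

w_new = np.zeros(8)                        # fresh head, backbone stays frozen
for _ in range(500):
    h = relu(Xt @ W1)                      # frozen features
    p = 1.0 / (1.0 + np.exp(-(h @ w_new)))
    w_new -= 0.5 * h.T @ (p - yt) / len(Xt)

acc = ((1.0 / (1.0 + np.exp(-(relu(Xt @ W1) @ w_new))) > 0.5) == yt).mean()
print(f"target-task training accuracy: {acc:.2f}")
```

Only the 8 head weights are updated in the second phase, which is why transfer learning can work with far fewer target-task examples than training the whole network from scratch would require.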

(Image credit: Subodh Malgonde)

Papers

Showing 2026–2050 of 10307 papers

Title | Status | Hype
A Physics-driven GraphSAGE Method for Physical Process Simulations Described by Partial Differential Equations | - | 0
Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts | Code | 1
HOLMES: HOLonym-MEronym based Semantic inspection for Convolutional Image Classifiers | Code | 0
Cross-user activity recognition via temporal relation optimal transport | - | 0
Conditional computation in neural networks: principles and research trends | - | 0
Low-Energy On-Device Personalization for MCUs | Code | 0
Authorship Style Transfer with Policy Optimization | Code | 1
Enhancing Transfer Learning with Flexible Nonparametric Posterior Sampling | - | 0
Fine-grained Prompt Tuning: A Parameter and Memory Efficient Transfer Learning Method for High-resolution Medical Image Classification | Code | 1
Discovering High-Strength Alloys via Physics-Transfer Learning | - | 0
Knowledge Transfer across Multiple Principal Component Analysis Studies | - | 0
DALSA: Domain Adaptation for Supervised Learning From Sparsely Annotated MR Images | - | 0
LeOCLR: Leveraging Original Images for Contrastive Learning of Visual Representations | - | 0
A Segmentation Foundation Model for Diverse-type Tumors | - | 0
Cross-domain and Cross-dimension Learning for Image-to-Graph Transformers | Code | 0
Forest Inspection Dataset for Aerial Semantic Segmentation and Depth Estimation | - | 0
Can LLMs' Tuning Methods Work in Medical Multimodal Domain? | Code | 1
Pre-Trained Model Recommendation for Downstream Fine-tuning | - | 0
Exploring Large Language Models and Hierarchical Frameworks for Classification of Large Unstructured Legal Documents | Code | 0
Large Language Models on Fine-grained Emotion Detection Dataset with Data Augmentation and Transfer Learning | - | 0
Towards In-Vehicle Multi-Task Facial Attribute Recognition: Investigating Synthetic Data and Vision Foundation Models | - | 0
Frequency Attention for Knowledge Distillation | Code | 1
Multimodal deep learning approach to predicting neurological recovery from coma after cardiac arrest | - | 0
OmniJet-α: The first cross-task foundation model for particle physics | Code | 1
RadarDistill: Boosting Radar-based Object Detection Performance via Knowledge Distillation from LiDAR Features | Code | 1
Page 82 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | - | Unverified
2 | DFA-ENT | Accuracy | 69.2 | - | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | - | Unverified
4 | EasyTL | Accuracy | 63.3 | - | Unverified
5 | MEDA | Accuracy | 60.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | - | Unverified