SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a different but related task. The idea is to leverage the knowledge captured by a pre-trained model when solving a new problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.
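A minimal, framework-free sketch of the feature-extraction flavor of transfer learning described above: a "pre-trained" extractor is kept frozen while only a small new head is trained on limited target-task data. All names, the random stand-in weights, and the toy dataset are illustrative assumptions, not any specific paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor. In practice these weights would come from a
# model trained on a large source dataset; here they are random stand-ins.
W_feat = rng.normal(size=(4, 8))          # maps 4-dim input -> 8-dim features

def extract_features(x):
    """Frozen feature extractor: its weights are never updated."""
    return np.tanh(x @ W_feat)

# New (target) task with limited data: train only a linear head on top.
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy binary labels

W_head = np.zeros(8)                       # the only trainable parameters
lr = 0.5
for _ in range(200):
    feats = extract_features(X)
    p = 1.0 / (1.0 + np.exp(-feats @ W_head))  # sigmoid prediction
    grad = feats.T @ (p - y) / len(y)          # logistic-loss gradient
    W_head -= lr * grad                        # update head only; W_feat stays frozen

acc = float(np.mean((p > 0.5) == y))
print(f"head-only training accuracy: {acc:.2f}")
```

Freezing the extractor and fitting only the head is the cheapest adaptation strategy; full fine-tuning would additionally allow small gradient updates to the pre-trained weights.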

(Image credit: Subodh Malgonde)

Papers

Showing 6651–6700 of 10307 papers

Titles (each listed with Hype 0):

Joint Photo Stream and Blog Post Summarization and Exploration
Joint Pilot Design and Channel Estimation using Deep Residual Learning for Multi-Cell Massive MIMO under Hardware Impairments
Joint PMD Tracking and Nonlinearity Compensation with Deep Neural Networks
Joint prediction of truecasing and punctuation for conversational speech in low-resource scenarios
Joint Recognition of Handwritten Text and Named Entities with a Neural End-to-end Model
Joint Semantic Transfer Network for IoT Intrusion Detection
Joint Similarity Item Exploration and Overlapped User Guidance for Multi-Modal Cross-Domain Recommendation
Joint Supervised and Self-Supervised Learning for 3D Real-World Challenges
Joint Unsupervised and Supervised Training for Multilingual ASR
Jump Diffusion-Informed Neural Networks with Transfer Learning for Accurate American Option Pricing under Data Scarcity
Jumpstarting Surgical Computer Vision
jurBERT: A Romanian BERT Model for Legal Judgement Prediction
JUST-BLUE at SemEval-2021 Task 1: Predicting Lexical Complexity using BERT and RoBERTa Pre-trained Language Models
Just rotate it! Uncertainty estimation in closed-source models via multiple queries
JutePestDetect: An Intelligent Approach for Jute Pest Identification Using Fine-Tuned Transfer Learning
KANsformer for Scalable Beamforming
Balanced End-to-End Monolingual pre-training for Low-Resourced Indic Languages Code-Switching Speech Recognition
KATO: Knowledge Alignment and Transfer for Transistor Sizing of Different Design and Technology
KDDIE at SemEval-2022 Task 11: Using DeBERTa for Named Entity Recognition
Keep Learning: Self-supervised Meta-learning for Learning from Inference
KEIS@JUST at SemEval-2020 Task 12: Identifying Multilingual Offensive Tweets Using Weighted Ensemble and Fine-Tuned BERT
Keratoconus Classifier for Smartphone-based Corneal Topographer
Kernel Alignment for Unsupervised Transfer Learning
Kernel Modulation: A Parameter-Efficient Method for Training Convolutional Neural Networks
Accelerated Bayesian Optimization through Weight-Prior Tuning
Key ingredients for effective zero-shot cross-lingual knowledge transfer in generative tasks
K for the Price of 1: Parameter-efficient Multi-task and Transfer Learning
A Two-Step Deep Learning Method for 3DCT-2DUS Kidney Registration During Breathing
KITE: A Kernel-based Improved Transferability Estimation Method
KIT-Multi: A Translation-Oriented Multilingual Embedding Corpus
kk2018 at SemEval-2020 Task 9: Adversarial Training for Code-Mixing Sentiment Classification
KMF: Knowledge-Aware Multi-Faceted Representation Learning for Zero-Shot Node Classification
k-Nearest Neighbor Augmented Neural Networks for Text Classification
Knee menisci segmentation and relaxometry of 3D ultrashort echo time (UTE) cones MR imaging using attention U-Net with transfer learning
k-NN as a Simple and Effective Estimator of Transferability
Knowledge as A Bridge: Improving Cross-domain Answer Selection with External Knowledge
Knowledge-Aware Prompt Tuning for Generalizable Vision-Language Models
Knowledge-Based Learning through Feature Generation
Knowledge capture, adaptation and composition (KCAC): A framework for cross-task curriculum learning in robotic manipulation
Knowledge Distillation Label Smoothing: Fact or Fallacy?
Knowledge Distillation Based Semantic Communications For Multiple Users
Knowledge Distillation of Black-Box Large Language Models
Knowledge Distillation in Federated Learning: a Survey on Long Lasting Challenges and New Solutions
Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher
Knowledge Distillation Under Ideal Joint Classifier Assumption
Knowledge Distillation via Token-level Relationship Graph
Knowledge distillation with error-correcting transfer learning for wind power prediction
Multimodal Feature Fusion and Knowledge-Driven Learning via Experts Consult for Thyroid Nodule Classification
Knowledge Efficient Deep Learning for Natural Language Processing
Page 134 of 207

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | APCLIP | Accuracy | 84.2 | — | Unverified |
| 2 | DFA-ENT | Accuracy | 69.2 | — | Unverified |
| 3 | DFA-SAFN | Accuracy | 69.1 | — | Unverified |
| 4 | EasyTL | Accuracy | 63.3 | — | Unverified |
| 5 | MEDA | Accuracy | 60.3 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CNN | 10-20% Mask PSNR | 3.23 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Co-Tuning | Accuracy | 85.65 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Physical Access | EER | 5.74 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | riadd.aucmedi | AUROC | 0.95 | — | Unverified |