SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new but related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.
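The idea can be sketched in a few lines of NumPy: a frozen "pretrained" feature extractor is reused unchanged, and only a small task-specific head is trained on the new data. Everything below is illustrative, a minimal sketch under stated assumptions: the random projection stands in for a real pre-trained backbone, and the toy dataset, `extract_features`, and training loop are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pre-trained feature extractor. In practice this
# would be e.g. a CNN backbone whose weights are kept fixed (not updated).
W_pre = rng.normal(size=(2, 8))  # "pretrained" weights, never trained here

def extract_features(x):
    """Fixed representation produced by the frozen extractor."""
    return np.tanh(x @ W_pre)

# Small labeled dataset for the new task (too small to train from scratch).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Only the new head is trained: logistic regression on the frozen features.
feats = extract_features(X)
w = np.zeros(8)
b = 0.0
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid predictions
    grad = p - y                                # gradient of log-loss w.r.t. logits
    w -= lr * feats.T @ grad / len(y)
    b -= lr * grad.mean()

acc = (((feats @ w + b) > 0) == (y > 0.5)).mean()
print(f"head-only accuracy: {acc:.2f}")
```

Because only the 9 head parameters are trained while the extractor stays frozen, even 200 labeled examples suffice; the same pattern (freeze the backbone, train a new head, optionally unfreeze later for fine-tuning) carries over to real frameworks.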

(Image credit: Subodh Malgonde)

Papers

Showing 8001–8025 of 10307 papers

Title | Status | Hype
AI based Safety System for Employees of Manufacturing Industries in Developing Countries | — | 0
Multi-task Optimization Based Co-training for Electricity Consumption Prediction | — | 0
Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning | — | 0
Multi-Task Pseudo-Label Learning for Non-Intrusive Speech Quality Assessment Model | — | 0
Multi-Task Regularization with Covariance Dictionary for Linear Classifiers | — | 0
Multi-Task Reinforcement Learning for Quadrotors | — | 0
AI-Based Energy Transportation Safety: Pipeline Radial Threat Estimation Using Intelligent Sensing System | — | 0
Multi-task Representation Learning with Stochastic Linear Bandits | — | 0
Multi-task Self-Supervised Learning for Human Activity Detection | — | 0
Multi-Task Self-Supervised Time-Series Representation Learning | — | 0
Speeding Up EfficientNet: Selecting Update Blocks of Convolutional Neural Networks using Genetic Algorithm in Transfer Learning | — | 0
Multi-Task Supervised Pretraining for Neural Domain Adaptation | — | 0
Multi-task transfer learning for finding actionable information from crisis-related messages on social media | — | 0
Multi-Transfer Learning Techniques for Detecting Auditory Brainstem Response | — | 0
AI-Assisted Decision-Making for Clinical Assessment of Auto-Segmented Contour Quality | — | 0
Multivariate and Online Transfer Learning with Uncertainty Quantification | — | 0
Multi-view and Multi-source Transfers in Neural Topic Modeling with Pretrained Topic and Word Embeddings | — | 0
Multiview Contrastive Learning for Unsupervised Domain Adaptation in Brain–Computer Interfaces | — | 0
Multi-View Cross-Lingual Structured Prediction with Minimum Supervision | — | 0
SpeGCL: Self-supervised Graph Spectrum Contrastive Learning without Positive Samples | — | 0
Multi-View Priors for Learning Detectors from Sparse Viewpoint Data | — | 0
AI Approaches in Processing and Using Data in Personalized Medicine | — | 0
Multi-way VNMT for UGC: Improving Robustness and Capacity via Mixture Density Networks | — | 0
MuMIC -- Multimodal Embedding for Multi-label Image Classification with Tempered Sigmoid | — | 0
muNet: Evolving Pretrained Deep Neural Networks into Scalable Auto-tuning Multitask Systems | — | 0
Page 321 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | — | Unverified
2 | DFA-ENT | Accuracy | 69.2 | — | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | — | Unverified
4 | EasyTL | Accuracy | 63.3 | — | Unverified
5 | MEDA | Accuracy | 60.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al.[1] | Accuracy | 96.12 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | — | Unverified