SOTAVerified

Transfer Learning

Transfer Learning is a machine learning technique in which a model trained on one task is re-purposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new, related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.
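The idea above can be sketched in a minimal, self-contained way: a fixed linear map stands in for a pre-trained feature extractor (frozen during fine-tuning), and only a small task-specific head is trained on the new task. All names and the toy dataset here are illustrative assumptions, not part of any particular benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" feature extractor: a fixed linear map whose
# weights stand in for layers learned on a large source task.
W_pre = rng.normal(size=(8, 4))  # frozen during fine-tuning

def features(x):
    # Reuse the learned representation without updating it.
    return np.tanh(x @ W_pre)

# New target task: binary classification with only a small dataset.
X = rng.normal(size=(64, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Only the new task head is trained; the extractor stays frozen.
w_head = np.zeros(4)
b_head = 0.0
lr = 0.5
for _ in range(300):
    z = features(X) @ w_head + b_head
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid
    grad = p - y                      # logistic-loss gradient w.r.t. z
    w_head -= lr * (features(X).T @ grad) / len(X)
    b_head -= lr * grad.mean()

acc = ((features(X) @ w_head + b_head > 0) == (y > 0.5)).mean()
```

In a real setting the frozen map would be a deep network pretrained on a large source dataset (e.g. ImageNet weights), and fine-tuning might also unfreeze some upper layers at a lower learning rate.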

(Image credit: Subodh Malgonde)

Papers

Showing 926–950 of 10307 papers

| Title | Status | Hype |
| --- | --- | --- |
| AraT5: Text-to-Text Transformers for Arabic Language Generation | Code | 1 |
| Breast Cancer Diagnosis in Two-View Mammography Using End-to-End Trained EfficientNet-Based Convolutional Network | Code | 1 |
| A Realistic Evaluation of Semi-Supervised Learning for Fine-Grained Classification | Code | 1 |
| Meta-Learning in Neural Networks: A Survey | Code | 1 |
| MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures | Code | 1 |
| Boosting Weakly Supervised Object Detection with Progressive Knowledge Transfer | Code | 1 |
| Meta-Transfer Learning for Low-Resource Abstractive Summarization | Code | 1 |
| Meta-Transfer Learning through Hard Tasks | Code | 1 |
| MetaXL: Meta Representation Transformation for Low-resource Cross-lingual Learning | Code | 1 |
| MutualNet: Adaptive ConvNet via Mutual Learning from Network Width and Resolution | Code | 1 |
| AReLU: Attention-based Rectified Linear Unit | Code | 1 |
| Boosting Weakly Supervised Object Detection via Learning Bounding Box Adjusters | Code | 1 |
| Mini but Mighty: Finetuning ViTs with Mini Adapters | Code | 1 |
| Active Transfer Learning for Efficient Video-Specific Human Pose Estimation | Code | 1 |
| MISSRec: Pre-training and Transferring Multi-modal Interest-aware Sequence Representation for Recommendation | Code | 1 |
| Mixed formulation of physics-informed neural networks for thermo-mechanically coupled systems and heterogeneous domains | Code | 1 |
| Mixed Information Flow for Cross-domain Sequential Recommendations | Code | 1 |
| Bridge Correlational Neural Networks for Multilingual Multimodal Representation Learning | Code | 1 |
| Blindly Assess Quality of In-the-Wild Videos via Quality-aware Pre-training and Motion Perception | Code | 1 |
| ArtNeRF: A Stylized Neural Field for 3D-Aware Cartoonized Face Synthesis | Code | 1 |
| BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions | Code | 1 |
| MMTL-UniAD: A Unified Framework for Multimodal and Multi-Task Learning in Assistive Driving Perception | Code | 1 |
| MoCo-CXR: MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models | Code | 1 |
| Model-Based Reinforcement Learning with Isolated Imaginations | Code | 1 |
| Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks | Code | 1 |
Page 38 of 413

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | APCLIP | Accuracy | 84.2 | | Unverified |
| 2 | DFA-ENT | Accuracy | 69.2 | | Unverified |
| 3 | DFA-SAFN | Accuracy | 69.1 | | Unverified |
| 4 | EasyTL | Accuracy | 63.3 | | Unverified |
| 5 | MEDA | Accuracy | 60.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CNN | 10-20% Mask PSNR | 3.23 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Co-Tuning | Accuracy | 85.65 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Physical Access | EER | 5.74 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | riadd.aucmedi | AUROC | 0.95 | | Unverified |