SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model when solving a new problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
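A common form of transfer learning is to freeze a pre-trained backbone and train only a new task-specific head. The sketch below illustrates the idea in PyTorch; the tiny `backbone` network is a stand-in for a real pre-trained model (e.g. a ResNet trained on ImageNet), and the data is synthetic — this is a minimal illustration of the mechanics, not a full recipe.

```python
import torch
import torch.nn as nn

# Stand-in for a pre-trained backbone (in practice, e.g. a ResNet
# loaded with ImageNet weights).
backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))

# Freeze the backbone so its weights are not updated during fine-tuning.
for p in backbone.parameters():
    p.requires_grad = False

# New head for the target task (here a 3-class classification problem).
head = nn.Linear(8, 3)
model = nn.Sequential(backbone, head)

# Optimize only the head's parameters.
optimizer = torch.optim.Adam(head.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 16)         # toy inputs for the new task
y = torch.randint(0, 3, (64,))  # toy labels

for _ in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Only the head is trainable: 8 * 3 weights + 3 biases = 27 parameters.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(trainable)
```

With more target-task data, one would typically unfreeze some or all backbone layers afterwards and continue training at a lower learning rate ("full fine-tuning").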

Papers

Showing 1151–1175 of 10307 papers

Title | Status | Hype
Byakto Speech: Real-time long speech synthesis with convolutional neural network: Transfer learning from English to Bangla | Code | 1
A unified framework for dataset shift diagnostics | Code | 1
Determining Chess Game State From an Image | Code | 1
Developing a Named Entity Recognition Dataset for Tagalog | Code | 1
A Study of Face Obfuscation in ImageNet | Code | 1
Differencing based Self-supervised pretraining for Scene Change Detection | Code | 1
Diffusion-Based Neural Network Weights Generation | Code | 1
Diffusion Model as Representation Learner | Code | 1
Discriminative Feature Alignment: Improving Transferability of Unsupervised Domain Adaptation by Gaussian-guided Latent Alignment | Code | 1
Disentangled Knowledge Transfer for OOD Intent Discovery with Unified Contrastive Learning | Code | 1
A Chinese Corpus for Fine-grained Entity Typing | Code | 1
Disentangling Spatial and Temporal Learning for Efficient Image-to-Video Transfer Learning | Code | 1
Distilling Knowledge from Graph Convolutional Networks | Code | 1
Distillation from Heterogeneous Models for Top-K Recommendation | Code | 1
Distilling BlackBox to Interpretable models for Efficient Transfer Learning | Code | 1
Distilling Image Classifiers in Object Detectors | Code | 1
Document AI: A Comparative Study of Transformer-Based, Graph-Based Models, and Convolutional Neural Networks For Document Layout Analysis | Code | 1
DocXClassifier: High Performance Explainable Deep Network for Document Image Classification | Code | 1
The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction | Code | 1
ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts | Code | 1
Masking meets Supervision: A Strong Learning Alliance | Code | 1
Domain Adaptation for Time Series Under Feature and Label Shifts | Code | 1
Neural Architecture Search using Deep Neural Networks and Monte Carlo Tree Search | Code | 1
Attention-Based Deep Learning Framework for Human Activity Recognition with User Adaptation | Code | 1
BuildingsBench: A Large-Scale Dataset of 900K Buildings and Benchmark for Short-Term Load Forecasting | Code | 1
Page 47 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | | Unverified
2 | DFA-ENT | Accuracy | 69.2 | | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | | Unverified
4 | EasyTL | Accuracy | 63.3 | | Unverified
5 | MEDA | Accuracy | 60.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | | Unverified