SOTAVerified

Transfer Learning

Transfer Learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new but related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original one that the pre-trained model can be adapted with only minor modifications.
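The idea above can be sketched in a few lines. This is a hypothetical minimal example, not code from any paper listed below: the "pretrained" backbone is stood in for by a fixed random projection that stays frozen, and only a small new task head is trained on the limited target data.

```python
import math
import random

random.seed(0)
D_IN, D_FEAT = 4, 8

# Frozen "pretrained" backbone weights: a stand-in for real pretrained
# parameters; they are never updated during fine-tuning below.
W_backbone = [[random.gauss(0, 1) for _ in range(D_IN)] for _ in range(D_FEAT)]

def backbone(x):
    """Frozen feature extractor: linear map followed by ReLU."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W_backbone]

def sigmoid(z):
    # Clamp to avoid overflow in math.exp for large |z|.
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

# Toy target task with limited data: label is 1 when the inputs sum positive.
xs = [[random.gauss(0, 1) for _ in range(D_IN)] for _ in range(200)]
ys = [1.0 if sum(x) > 0 else 0.0 for x in xs]

# Train only the new task head: logistic regression on the frozen features.
w_head, b_head, lr = [0.0] * D_FEAT, 0.0, 0.1
for _ in range(100):
    for x, y in zip(xs, ys):
        f = backbone(x)
        g = sigmoid(sum(w * fi for w, fi in zip(w_head, f)) + b_head) - y
        w_head = [w - lr * g * fi for w, fi in zip(w_head, f)]
        b_head -= lr * g

def predict(x):
    f = backbone(x)
    return sigmoid(sum(w * fi for w, fi in zip(w_head, f)) + b_head) > 0.5

acc = sum(predict(x) == (y > 0.5) for x, y in zip(xs, ys)) / len(xs)
print(f"training accuracy with frozen backbone, trained head: {acc:.2f}")
```

In practice the frozen backbone would be a real pretrained network (e.g. an ImageNet model or a pretrained language model), and one would often unfreeze some of its layers after the head converges; the freeze-then-train-head pattern shown here is the simplest variant.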

(Image credit: Subodh Malgonde)

Papers

Showing 1201-1225 of 10307 papers

Title | Status | Hype
FASA: Feature Augmentation and Sampling Adaptation for Long-Tailed Instance Segmentation | Code | 1
JASS: Japanese-specific Sequence to Sequence Pre-training for Neural Machine Translation | Code | 1
A Study of Face Obfuscation in ImageNet | Code | 1
Audio-based Near-Duplicate Video Retrieval with Audio Similarity Learning | Code | 1
Audio Embeddings as Teachers for Music Classification | Code | 1
Audio Spoofing Verification using Deep Convolutional Neural Networks by Transfer Learning | Code | 1
A Chinese Corpus for Fine-grained Entity Typing | Code | 1
Common Voice: A Massively-Multilingual Speech Corpus | Code | 1
KdConv: A Chinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation | Code | 1
Matrix Information Theory for Self-Supervised Learning | Code | 1
Automatic identification of segmentation errors for radiotherapy using geometric learning | Code | 1
Fashionpedia: Ontology, Segmentation, and an Attribute Localization Dataset | Code | 1
Comparative Evaluation of Pretrained Transfer Learning Models on Automatic Short Answer Grading | Code | 1
KNEEL: Knee Anatomical Landmark Localization Using Hourglass Networks | Code | 1
A Comparative Study of Existing and New Deep Learning Methods for Detecting Knee Injuries using the MRNet Dataset | Code | 1
Compositional Language Continual Learning | Code | 1
Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning | Code | 1
Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup | Code | 1
ComSL: A Composite Speech-Language Model for End-to-End Speech-to-Text Translation | Code | 1
Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data | Code | 1
Masking meets Supervision: A Strong Learning Alliance | Code | 1
Hierarchical Bayesian Modelling for Knowledge Transfer Across Engineering Fleets via Multitask Learning | Code | 1
Confidence-Aware Multi-Teacher Knowledge Distillation | Code | 1
AUGNLG: Few-shot Natural Language Generation using Self-trained Data Augmentation | Code | 1
Few-Sample Named Entity Recognition for Security Vulnerability Reports by Fine-Tuning Pre-Trained Language Models | Code | 1
Page 49 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | | Unverified
2 | DFA-ENT | Accuracy | 69.2 | | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | | Unverified
4 | EasyTL | Accuracy | 63.3 | | Unverified
5 | MEDA | Accuracy | 60.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | | Unverified