SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new, related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.
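The fine-tuning recipe described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not a reference implementation: the small network, layer sizes, and class counts below are hypothetical stand-ins for a real pre-trained model (e.g. an ImageNet backbone).

```python
import torch
import torch.nn as nn

# Stand-in backbone; in practice its weights would come from training
# on a large source task (e.g. ImageNet) rather than random init.
backbone = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
)
# Hypothetical head for the original source task (100 classes),
# discarded when transferring to the target task.
source_head = nn.Linear(64, 100)

# Step 1: freeze the backbone so its learned features are reused as-is.
for param in backbone.parameters():
    param.requires_grad = False

# Step 2: attach a fresh head for the new target task (10 classes).
target_head = nn.Linear(64, 10)
model = nn.Sequential(backbone, target_head)

# Only the new head's parameters receive gradients and are optimized.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)

x = torch.randn(8, 32)   # batch of 8 feature vectors
logits = model(x)        # shape: (8, 10)
```

When the target dataset is larger, a common variant is to unfreeze some or all backbone layers and fine-tune them with a smaller learning rate instead of freezing them entirely.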

(Image credit: Subodh Malgonde)

Papers

Showing 2901-2925 of 10307 papers

Title | Status | Hype
Exploring the Limits of Weakly Supervised Pretraining | Code | 0
Exploring the Robustness of Task-oriented Dialogue Systems for Colloquial German Varieties | Code | 0
Cross-Dimension Affinity Distillation for 3D EM Neuron Segmentation | Code | 0
Best of Both Worlds: Transferring Knowledge from Discriminative Learning to a Generative Visual Dialog Model | Code | 0
Detection and classification of vocal productions in large scale audio recordings | Code | 0
Detecting Insincere Questions from Text: A Transfer Learning Approach | Code | 0
ATL: Autonomous Knowledge Transfer from Many Streaming Processes | Code | 0
Cross-dataset COVID-19 Transfer Learning with Cough Detection, Cough Segmentation, and Data Augmentation | Code | 0
Exploring the Effectiveness and Consistency of Task Selection in Intermediate-Task Transfer Learning | Code | 0
Exploring Target Representations for Masked Autoencoders | Code | 0
Leveraging Cross-Lingual Transfer Learning in Spoken Named Entity Recognition Systems | Code | 0
Exploring the Benefits of Differentially Private Pre-training and Parameter-Efficient Fine-tuning for Table Transformers | Code | 0
Exploring Pre-Trained Transformers and Bilingual Transfer Learning for Arabic Coreference Resolution | Code | 0
Detecting Urgency Status of Crisis Tweets: A Transfer Learning Approach for Low Resource Languages | Code | 0
Cross-corpus Readability Compatibility Assessment for English Texts | Code | 0
Multilingual Content Moderation: A Case Study on Reddit | Code | 0
Exploring Self-Supervised Representation Learning For Low-Resource Medical Image Analysis | Code | 0
Multilingual is not enough: BERT for Finnish | Code | 0
Exploring the Benefits of Visual Prompting in Differential Privacy | Code | 0
Exploring User Retrieval Integration towards Large Language Models for Cross-Domain Sequential Recommendation | Code | 0
Beyond English: Evaluating Automated Measurement of Moral Foundations in Non-English Discourse with a Chinese Case Study | Code | 0
Multilingual Offensive Language Identification with Cross-lingual Embeddings | Code | 0
Cross-Context Backdoor Attacks against Graph Prompt Learning | Code | 0
Multilingual transfer of acoustic word embeddings improves when training on languages related to the target zero-resource language | Code | 0
Exploring Model Transferability through the Lens of Potential Energy | Code | 0
Page 117 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | | Unverified
2 | DFA-ENT | Accuracy | 69.2 | | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | | Unverified
4 | EasyTL | Accuracy | 63.3 | | Unverified
5 | MEDA | Accuracy | 60.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | | Unverified