SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a different but related task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
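A common transfer-learning recipe is to freeze a pre-trained backbone and train only a new task-specific head on the (small) target dataset. The following is a minimal, self-contained numpy sketch of that recipe under stated assumptions: the random `W_backbone` merely stands in for weights learned on a large source task, and all names (`extract_features`, `w_head`) are illustrative, not from any library.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor: a stand-in for weights learned on a
# large source task (here fixed random projections, for illustration only).
W_backbone = rng.normal(size=(10, 32))

def extract_features(X):
    # Frozen backbone: W_backbone is never updated during fine-tuning.
    return np.tanh(X @ W_backbone)

# Small target-task dataset (limited data is the typical transfer setting).
X = rng.normal(size=(40, 10))
y = (X[:, 0] > 0).astype(float)

# New task-specific head, trained from scratch on the target task.
w_head = np.zeros(32)
b_head = 0.0

def predict(X):
    z = extract_features(X) @ w_head + b_head
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid

# Fine-tune only the head with gradient descent on the logistic loss.
lr = 0.5
feats = extract_features(X)
for _ in range(200):
    p = predict(X)
    grad_w = feats.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w_head -= lr * grad_w
    b_head -= lr * grad_b

accuracy = np.mean((predict(X) > 0.5) == (y == 1))
```

In practice the frozen backbone would be a real pre-trained network (and one often unfreezes some of its layers for a second, lower-learning-rate fine-tuning pass), but the mechanics — reuse the feature extractor, replace and retrain the head — are the same.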

Papers

Showing 576–600 of 10307 papers

Title | Status | Hype
AraT5: Text-to-Text Transformers for Arabic Language Generation | Code | 1
A Qualitative Evaluation of Language Models on Automatic Question-Answering for COVID-19 | Code | 1
A Realistic Evaluation of Semi-Supervised Learning for Fine-Grained Classification | Code | 1
AReLU: Attention-based Rectified Linear Unit | Code | 1
Cross-Domain Few-Shot Semantic Segmentation | Code | 1
Cross-Lingual Abstractive Summarization with Limited Parallel Resources | Code | 1
Cumulative Spatial Knowledge Distillation for Vision Transformers | Code | 1
Alice: Proactive Learning with Teacher's Demonstrations for Weak-to-Strong Generalization | Code | 1
CytoImageNet: A large-scale pretraining dataset for bioimage transfer learning | Code | 1
DAF:re: A Challenging, Crowd-Sourced, Large-Scale, Long-Tailed Dataset For Anime Character Recognition | Code | 1
Neural Architecture Search using Deep Neural Networks and Monte Carlo Tree Search | Code | 1
ArMATH: a Dataset for Solving Arabic Math Word Problems | Code | 1
ArtNeRF: A Stylized Neural Field for 3D-Aware Cartoonized Face Synthesis | Code | 1
Data Efficient Child-Adult Speaker Diarization with Simulated Conversations | Code | 1
Data-Free Model Extraction | Code | 1
3D Point Cloud Registration with Multi-Scale Architecture and Unsupervised Transfer Learning | Code | 1
Amalgamating Knowledge From Heterogeneous Graph Neural Networks | Code | 1
A Simple Baseline for Bayesian Uncertainty in Deep Learning | Code | 1
aschern at SemEval-2020 Task 11: It Takes Three to Tango: RoBERTa, CRF, and Transfer Learning | Code | 1
A Simple and Effective Approach to Automatic Post-Editing with Transfer Learning | Code | 1
Deep comparisons of Neural Networks from the EEGNet family | Code | 1
A Simple and Robust Framework for Cross-Modality Medical Image Segmentation applied to Vision Transformers | Code | 1
A simple, efficient and scalable contrastive masked autoencoder for learning visual representations | Code | 1
CrAM: A Compression-Aware Minimizer | Code | 1
Page 24 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | - | Unverified
2 | DFA-ENT | Accuracy | 69.2 | - | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | - | Unverified
4 | EasyTL | Accuracy | 63.3 | - | Unverified
5 | MEDA | Accuracy | 60.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | - | Unverified