SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a different but related task. The idea is to leverage the knowledge captured by a pre-trained model when solving a new problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
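The idea above — keep a pre-trained feature extractor frozen and fine-tune only a small new head on the target task — can be sketched in a few lines. This is a minimal, self-contained illustration: the "pretrained backbone" is simulated by a fixed random projection (in practice it would be, e.g., a network trained on ImageNet), and the data and labels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" backbone: a frozen feature extractor. A fixed random
# projection stands in for real learned weights (an assumption made
# purely to keep this sketch self-contained). It is never updated.
W_backbone = rng.normal(size=(20, 8)) * 0.1

def extract_features(x):
    # Fixed representation produced by the frozen backbone.
    return np.tanh(x @ W_backbone)

# New task: a small labeled dataset, too small to train from scratch.
X = rng.normal(size=(200, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(float)  # synthetic binary labels

# Fine-tune only a new linear "head" on top of the frozen features,
# using plain gradient descent on the logistic loss.
feats = extract_features(X)
w, b = np.zeros(8), 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid predictions
    grad = p - y                                # logistic-loss gradient
    w -= lr * feats.T @ grad / len(y)
    b -= lr * grad.mean()

acc = ((feats @ w + b > 0) == (y > 0.5)).mean()
print(f"head-only fine-tuning accuracy: {acc:.2f}")
```

Because only the small head is trained, the parameter count being optimized is tiny, which is exactly why transfer learning works with limited target-task data; full fine-tuning (also updating the backbone) is the other common variant.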

Papers

Showing 1001–1025 of 10307 papers

Title | Status | Hype
Do Adversarially Robust ImageNet Models Transfer Better? | Code | 1
A Simple Baseline for Bayesian Uncertainty in Deep Learning | Code | 1
A simple, efficient and scalable contrastive masked autoencoder for learning visual representations | Code | 1
A Simple Language Model for Task-Oriented Dialogue | Code | 1
A Simple Multi-Modality Transfer Learning Baseline for Sign Language Translation | Code | 1
A Simple yet Effective Framework for Few-Shot Aspect-Based Sentiment Analysis | Code | 1
Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability | Code | 1
A single-cell gene expression language model | Code | 1
The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction | Code | 1
On Transferability of Prompt Tuning for Natural Language Processing | Code | 1
Efficient Adaptation of Large Vision Transformer via Adapter Re-Composing | Code | 1
Attention-Based Deep Learning Framework for Human Activity Recognition with User Adaptation | Code | 1
A General-Purpose Self-Supervised Model for Computational Pathology | Code | 1
DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule | Code | 1
AttentionHTR: Handwritten Text Recognition Based on Attention Encoder-Decoder Networks | Code | 1
Overcoming Data and Model Heterogeneities in Decentralized Federated Learning via Synthetic Anchors | Code | 1
Domain Adaptation of Thai Word Segmentation Models using Stacked Ensemble | Code | 1
Domain Adaptation for Time Series Under Feature and Label Shifts | Code | 1
OW-DETR: Open-world Detection Transformer | Code | 1
Assemble Foundation Models for Automatic Code Summarization | Code | 1
Domain Adaptation with Invariant Representation Learning: What Transformations to Learn? | Code | 1
Paced-Curriculum Distillation with Prediction and Label Uncertainty for Image Segmentation | Code | 1
Effect of Pre-Training Scale on Intra- and Inter-Domain Full and Few-Shot Transfer Learning for Natural and Medical X-Ray Chest Images | Code | 1
Efficient Conditional GAN Transfer with Knowledge Propagation across Classes | Code | 1
EEG-Reptile: An Automatized Reptile-Based Meta-Learning Library for BCIs | Code | 1
Page 41 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | | Unverified
2 | DFA-ENT | Accuracy | 69.2 | | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | | Unverified
4 | EasyTL | Accuracy | 63.3 | | Unverified
5 | MEDA | Accuracy | 60.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | | Unverified