SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is re-purposed and fine-tuned for a different but related task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.
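
The fine-tuning recipe described above can be sketched in PyTorch. This is a minimal, illustrative example: the backbone below is a randomly initialized stand-in for a pre-trained network (in practice you would load real weights, e.g. a torchvision ResNet), and the layer sizes, class count, and learning rate are assumptions, not values from this page.

```python
import torch
import torch.nn as nn

# Stand-in for a pre-trained backbone. In real use, load actual
# pre-trained weights instead of random initialization.
backbone = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
)

# Freeze the backbone so its learned representations are kept as-is.
for param in backbone.parameters():
    param.requires_grad = False

# New task-specific head (here: an illustrative 5-class problem).
head = nn.Linear(64, 5)
model = nn.Sequential(backbone, head)

# Optimize only the trainable (head) parameters.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

# One illustrative fine-tuning step on random stand-in data.
x = torch.randn(8, 32)
y = torch.randint(0, 5, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```

A common variant is to unfreeze some or all backbone layers after the head has converged, training them with a much smaller learning rate so the pre-trained features are adjusted rather than overwritten.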

(Image credit: Subodh Malgonde)

Papers

Showing 7176–7200 of 10307 papers

Title | Status | Hype
Tabular Few-Shot Generalization Across Heterogeneous Feature Spaces | — | 0
Building Height Prediction with Instance Segmentation | — | 0
Building Inspection Toolkit: Unified Evaluation and Strong Baselines for Damage Recognition | — | 0
TADFormer: Task-Adaptive Dynamic Transformer for Efficient Multi-Task Learning | — | 0
TAD: Transfer Learning-based Multi-Adversarial Detection of Evasion Attacks against Network Intrusion Detection Systems | — | 0
Tag that issue: Applying API-domain labels in issue tracking systems | — | 0
TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models | — | 0
Taiwanese-Accented Mandarin and English Multi-Speaker Talking-Face Synthesis System | — | 0
Taking Actions Separately: A Bidirectionally-Adaptive Transfer Learning Method for Low-Resource Neural Machine Translation | — | 0
Taking a Stance on Fake News: Towards Automatic Disinformation Assessment via Deep Bidirectional Transformer Language Models for Stance Detection | — | 0
Taking it further: leveraging pseudo labels for field delineation across label-scarce smallholder regions | — | 0
Talking Models: Distill Pre-trained Knowledge to Downstream Models via Interactive Communication | — | 0
Taming "data-hungry" reinforcement learning? Stability in continuous state-action spaces | — | 0
Tao: Re-Thinking DL-based Microarchitecture Simulation | — | 0
Building medical image classifiers with very limited data using segmentation networks | — | 0
TAP: The Attention Patch for Cross-Modal Knowledge Transfer from Unlabeled Modality | — | 0
Building Robust Industrial Applicable Object Detection Models Using Transfer Learning and Single Pass Deep Learning Architectures | — | 0
Target Aware Network Adaptation for Efficient Representation Learning | — | 0
Targeted Attention for Generalized- and Zero-Shot Learning | — | 0
Targeting Underrepresented Populations in Precision Medicine: A Federated Transfer Learning Approach | — | 0
Target PCA: Transfer Learning Large Dimensional Panel Data | — | 0
Target Speaker Lipreading by Audio-Visual Self-Distillation Pretraining and Speaker Adaptation | — | 0
Target Transfer Q-Learning and Its Convergence Analysis | — | 0
TartuNLP at EvaLatin 2024: Emotion Polarity Detection | — | 0
Page 288 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | — | Unverified
2 | DFA-ENT | Accuracy | 69.2 | — | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | — | Unverified
4 | EasyTL | Accuracy | 63.3 | — | Unverified
5 | MEDA | Accuracy | 60.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | — | Unverified