SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new but related problem. This is useful when there is too little data available to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.
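The "adapt with only minor modifications" recipe usually means freezing the pre-trained layers and training only a small new head on the target task. The sketch below illustrates that pattern under a simplifying assumption: a fixed random projection stands in for a frozen pre-trained feature extractor (in practice you would load real pre-trained weights, e.g. an ImageNet backbone), and a logistic-regression head is the only part that gets trained.

```python
import numpy as np

# Minimal sketch of "freeze the backbone, train a new head".
# ASSUMPTION: a fixed random projection plays the role of the frozen
# pre-trained feature extractor; only the new head is updated.

rng = np.random.default_rng(0)

def frozen_backbone(x, W):
    """Frozen 'pre-trained' layer: features are computed, weights never updated."""
    return np.tanh(x @ W)

def train_head(feats, y, lr=0.1, steps=200):
    """Fit a new logistic-regression head on top of the frozen features."""
    w = np.zeros(feats.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid probabilities
        grad = p - y                                # dL/dlogits for BCE loss
        w -= lr * feats.T @ grad / len(y)           # update head weights only
        b -= lr * grad.mean()
    return w, b

# Toy "new task" with limited labeled data.
X = rng.normal(size=(100, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

W = 0.1 * rng.normal(size=(20, 8))   # stand-in for pre-trained weights
feats = frozen_backbone(X, W)        # backbone output stays fixed
w, b = train_head(feats, y)          # only the head is trained

preds = (feats @ w + b > 0).astype(float)
print("train accuracy:", (preds == y).mean())
```

Because only the 9-parameter head is trained, the approach remains feasible even with very few labeled examples — the point made in the paragraph above.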

(Image credit: Subodh Malgonde)

Papers

Showing 151–175 of 10,307 papers

Title | Status | Hype
Domain Adaptation of VLM for Soccer Video Understanding | — | 0
Transfer Learning from Visual Speech Recognition to Mouthing Recognition in German Sign Language | Code | 0
Neural Incompatibility: The Unbridgeable Gap of Cross-Scale Parametric Knowledge Transfer in Large Language Models | Code | 1
Dual Decomposition of Weights and Singular Value Low Rank Adaptation | — | 0
DRP: Distilled Reasoning Pruning with Skill-aware Step Decomposition for Efficient Large Reasoning Models | — | 0
Contrastive Consolidation of Top-Down Modulations Achieves Sparsely Supervised Continual Learning | — | 0
Data-Efficient Hate Speech Detection via Cross-Lingual Nearest Neighbor Retrieval with Limited Labeled Data | — | 0
Bi-level Unbalanced Optimal Transport for Partial Domain Adaptation | — | 0
HR-VILAGE-3K3M: A Human Respiratory Viral Immunization Longitudinal Gene Expression Dataset for Systems Immunity | Code | 0
A Hybrid Quantum Classical Pipeline for X Ray Based Fracture Diagnosis | — | 0
Understanding Cross-Lingual Inconsistency in Large Language Models | — | 0
Towards A Generalist Code Embedding Model Based On Massive Data Synthesis | — | 0
On the Mechanisms of Adversarial Data Augmentation for Robust and Adaptive Transfer Learning | — | 0
Mamba-Adaptor: State Space Model Adaptor for Visual Recognition | — | 0
A Token is Worth over 1,000 Tokens: Efficient Knowledge Distillation through Low-Rank Clone | Code | 1
Adaptive Image Restoration for Video Surveillance: A Real-Time Approach | — | 0
Cross-modal Knowledge Transfer Learning as Graph Matching Based on Optimal Transport for ASR | — | 0
InnateCoder: Learning Programmatic Options with Foundation Models | Code | 0
Efficient Federated Class-Incremental Learning of Pre-Trained Models via Task-agnostic Low-rank Residual Adaptation | — | 0
CLIP-aware Domain-Adaptive Super-Resolution | — | 0
Relation-Aware Graph Foundation Model | — | 0
Residual Feature Integration is Sufficient to Prevent Negative Transfer | Code | 0
FiGKD: Fine-Grained Knowledge Distillation via High-Frequency Detail Transfer | — | 0
Programmable metasurfaces for future photonic artificial intelligence | — | 0
Humble your Overconfident Networks: Unlearning Overfitting via Sequential Monte Carlo Tempered Deep Ensembles | — | 0
Page 7 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | — | Unverified
2 | DFA-ENT | Accuracy | 69.2 | — | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | — | Unverified
4 | EasyTL | Accuracy | 63.3 | — | Unverified
5 | MEDA | Accuracy | 60.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | — | Unverified