
Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is re-purposed and fine-tuned for a different but related task. The idea is to leverage the knowledge captured by a pre-trained model when solving a new problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
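The workflow described above can be sketched without any ML framework: treat a set of fixed "pretrained" weights as a frozen feature extractor and train only a small new head on the target task. The pretrained weights, toy target function, and hyperparameters below are illustrative assumptions, not taken from any paper on this page.

```python
# Minimal transfer-learning sketch (pure Python, assumed toy setup):
# a frozen "pretrained" feature extractor plus a newly trained linear head.

# Pretend these weights were learned on a large source task; we freeze them.
PRETRAINED_W = [0.9, -0.4, 0.3]

def frozen_features(x):
    """Fixed feature extractor: the pretrained weights are never updated."""
    return [w * xi for w, xi in zip(PRETRAINED_W, x)]

def train_head(data, epochs=200, lr=0.1):
    """Fine-tune only the new linear head (plus bias) on the target task."""
    head = [0.0, 0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for x, y in data:
            f = frozen_features(x)
            pred = sum(h * fi for h, fi in zip(head, f)) + bias
            err = pred - y
            # Gradient step on the head only; PRETRAINED_W stays frozen.
            head = [h - lr * err * fi for h, fi in zip(head, f)]
            bias -= lr * err
    return head, bias

# Toy target task: y = 2*x0 - x1, with only four examples -- the
# limited-data regime where transfer learning is typically used.
data = [([1, 0, 0], 2), ([0, 1, 0], -1), ([1, 1, 0], 1), ([0, 0, 1], 0)]
head, bias = train_head(data)
pred = sum(h * fi for h, fi in zip(head, frozen_features([1, 1, 0]))) + bias
print(round(pred, 2))
```

Only the head's few parameters are fit to the small target dataset, which is the same reason full-scale transfer learning (e.g. fine-tuning a classifier head on frozen CNN or CLIP features) works with limited labels.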

Papers

Showing 5175 of 10307 papers

Title | Status | Hype
CLIP-Powered Domain Generalization and Domain Adaptation: A Comprehensive Survey | Code | 2
TransST: Transfer Learning Embedded Spatial Factor Modeling of Spatial Transcriptomics Data | Code | 2
A Survey on Remote Sensing Foundation Models: From Vision to Multimodality | Code | 2
Teaching LMMs for Image Quality Scoring and Interpreting | Code | 2
MMRL: Multi-Modal Representation Learning for Vision-Language Models | Code | 2
External Knowledge Injection for CLIP-Based Class-Incremental Learning | Code | 2
MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing | Code | 2
NExT-Mol: 3D Diffusion Meets 1D Language Modeling for 3D Molecule Generation | Code | 2
Towards Robust and Generalizable Lensless Imaging with Modular Learned Reconstruction | Code | 2
MM-Retinal V2: Transfer an Elite Knowledge Spark into Fundus Vision-Language Pretraining | Code | 2
Universal Image Restoration Pre-training via Degradation Classification | Code | 2
Uni-Sign: Toward Unified Sign Language Understanding at Scale | Code | 2
NUDT4MSTAR: A Large Dataset and Benchmark Towards Remote Sensing Object Recognition in the Wild | Code | 2
Densely Connected Parameter-Efficient Tuning for Referring Image Segmentation | Code | 2
MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models | Code | 2
All-in-one foundational models learning across quantum chemical levels | Code | 2
SLCA++: Unleash the Power of Sequential Fine-tuning for Continual Learning with Pre-training | Code | 2
Exploring the Effect of Dataset Diversity in Self-Supervised Learning for Surgical Computer Vision | Code | 2
Accessing Vision Foundation Models at ImageNet-level Costs | Code | 2
AddressCLIP: Empowering Vision-Language Models for City-wide Image Address Localization | Code | 2
HiDe-PET: Continual Learning via Hierarchical Decomposition of Parameter-Efficient Tuning | Code | 2
AXIAL: Attention-based eXplainability for Interpretable Alzheimer's Localized Diagnosis using 2D CNNs on 3D MRI brain scans | Code | 2
T-FREE: Subword Tokenizer-Free Generative LLMs via Sparse Representations for Memory-Efficient Embeddings | Code | 2
Automated MRI Quality Assessment of Brain T1-weighted MRI in Clinical Data Warehouses: A Transfer Learning Approach Relying on Artefact Simulation | Code | 2
Large Scale Transfer Learning for Tabular Data via Language Modeling | Code | 2
Page 3 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | | Unverified
2 | DFA-ENT | Accuracy | 69.2 | | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | | Unverified
4 | EasyTL | Accuracy | 63.3 | | Unverified
5 | MEDA | Accuracy | 60.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | | Unverified