SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is re-purposed and fine-tuned for a related but different task. The idea is to leverage the knowledge a pre-trained model has already learned rather than learning it again from scratch. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications, such as replacing its output layer (see the sketch below).

(Image credit: Subodh Malgonde)
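
A common recipe is to take a network pre-trained on a large source dataset, freeze its backbone, and train only a small task-specific head on the target data. Below is a minimal sketch of that recipe, assuming PyTorch and torchvision (0.13 or later) are installed; the ResNet-18 backbone, the 10-class head, the random dummy batch, and all hyperparameters are illustrative placeholders, not a reference implementation.

import torch
import torch.nn as nn
from torchvision import models

# Load a model pre-trained on ImageNet (the "original task").
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze all pre-trained weights so the learned features are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head to match the new task
# (a hypothetical 10-class problem); the new layer is trainable by default.
num_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Optimize only the new head's parameters.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 8 RGB images.
inputs = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(inputs), labels)
loss.backward()
optimizer.step()

Here only the new head receives gradient updates; unfreezing part of the backbone and continuing training at a lower learning rate is the usual next step when more target data is available.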

Papers

Showing 51–100 of 10,307 papers

Title | Status | Hype
CLIP-Powered Domain Generalization and Domain Adaptation: A Comprehensive Survey | Code | 2
TransST: Transfer Learning Embedded Spatial Factor Modeling of Spatial Transcriptomics Data | Code | 2
A Survey on Remote Sensing Foundation Models: From Vision to Multimodality | Code | 2
Teaching LMMs for Image Quality Scoring and Interpreting | Code | 2
MMRL: Multi-Modal Representation Learning for Vision-Language Models | Code | 2
External Knowledge Injection for CLIP-Based Class-Incremental Learning | Code | 2
MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing | Code | 2
NExT-Mol: 3D Diffusion Meets 1D Language Modeling for 3D Molecule Generation | Code | 2
Towards Robust and Generalizable Lensless Imaging with Modular Learned Reconstruction | Code | 2
MM-Retinal V2: Transfer an Elite Knowledge Spark into Fundus Vision-Language Pretraining | Code | 2
Universal Image Restoration Pre-training via Degradation Classification | Code | 2
Uni-Sign: Toward Unified Sign Language Understanding at Scale | Code | 2
NUDT4MSTAR: A Large Dataset and Benchmark Towards Remote Sensing Object Recognition in the Wild | Code | 2
Densely Connected Parameter-Efficient Tuning for Referring Image Segmentation | Code | 2
MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models | Code | 2
All-in-one foundational models learning across quantum chemical levels | Code | 2
SLCA++: Unleash the Power of Sequential Fine-tuning for Continual Learning with Pre-training | Code | 2
Exploring the Effect of Dataset Diversity in Self-Supervised Learning for Surgical Computer Vision | Code | 2
Accessing Vision Foundation Models at ImageNet-level Costs | Code | 2
AddressCLIP: Empowering Vision-Language Models for City-wide Image Address Localization | Code | 2
HiDe-PET: Continual Learning via Hierarchical Decomposition of Parameter-Efficient Tuning | Code | 2
AXIAL: Attention-based eXplainability for Interpretable Alzheimer's Localized Diagnosis using 2D CNNs on 3D MRI brain scans | Code | 2
T-FREE: Subword Tokenizer-Free Generative LLMs via Sparse Representations for Memory-Efficient Embeddings | Code | 2
Automated MRI Quality Assessment of Brain T1-weighted MRI in Clinical Data Warehouses: A Transfer Learning Approach Relying on Artefact Simulation | Code | 2
Large Scale Transfer Learning for Tabular Data via Language Modeling | Code | 2
Enhancing Zero-Shot Facial Expression Recognition by LLM Knowledge Transfer | Code | 2
FlashST: A Simple and Universal Prompt-Tuning Framework for Traffic Prediction | Code | 2
XTrack: Multimodal Training Boosts RGB-X Video Object Trackers | Code | 2
MRSegmentator: Multi-Modality Segmentation of 40 Classes in MRI and CT | Code | 2
Efficient Remote Sensing with Harmonized Transfer Learning and Modality Alignment | Code | 2
Constructing and Exploring Intermediate Domains in Mixed Domain Semi-supervised Medical Image Segmentation | Code | 2
DinoBloom: A Foundation Model for Generalizable Cell Embeddings in Hematology | Code | 2
Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners | Code | 2
NeRF-MAE: Masked AutoEncoders for Self-Supervised 3D Representation Learning for Neural Radiance Fields | Code | 2
Quantformer: from attention to profit with a quantitative transformer trading strategy | Code | 2
An Upload-Efficient Scheme for Transferring Knowledge From a Server-Side Pre-trained Generator to Clients in Heterogeneous Federated Learning | Code | 2
HistGen: Histopathology Report Generation via Local-Global Feature Encoding and Cross-modal Context Interaction | Code | 2
AUFormer: Vision Transformers are Parameter-Efficient Facial Action Unit Detectors | Code | 2
Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis | Code | 2
CARTE: Pretraining and Transfer for Tabular Learning | Code | 2
CLAP: Learning Transferable Binary Code Representations with Natural Language Supervision | Code | 2
VOLoc: Visual Place Recognition by Querying Compressed Lidar Map | Code | 2
Spatio-Temporal Few-Shot Learning via Diffusive Neural Network Generation | Code | 2
An end-to-end attention-based approach for learning on graphs | Code | 2
PLAPT: Protein-Ligand Binding Affinity Prediction Using Pretrained Transformers | Code | 2
Triplet Interaction Improves Graph Transformers: Accurate Molecular Graph Learning with Triplet Graph Transformers | Code | 2
Graph Domain Adaptation: Challenges, Progress and Prospects | Code | 2
Finetuning Large Language Models for Vulnerability Detection | Code | 2
MMA: Multi-Modal Adapter for Vision-Language Models | Code | 2
Any-point Trajectory Modeling for Policy Learning | Code | 2
Page 2 of 207

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | - | Unverified
2 | DFA-ENT | Accuracy | 69.2 | - | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | - | Unverified
4 | EasyTL | Accuracy | 63.3 | - | Unverified
5 | MEDA | Accuracy | 60.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | - | Unverified