SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model when solving a new but related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

( Image credit: Subodh Malgonde )
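The recipe described above can be sketched in miniature: freeze a "pre-trained" feature extractor and train only a small new head on the target task. The model and data below are hypothetical toy stand-ins (a real project would fine-tune a network from a framework such as PyTorch or Keras); the point is only the freeze-backbone, train-head pattern.

```python
import math

def backbone(x):
    """Frozen feature extractor (stands in for a pre-trained network).
    Its "weights" are fixed; we never update them."""
    return [x[0] + x[1], x[0] - x[1]]

def train_head(data, lr=0.1, epochs=200):
    """Fit a logistic-regression head on top of the frozen backbone."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = backbone(x)                     # features only, no gradient
            z = w[0] * f[0] + w[1] * f[1] + b
            p = 1 / (1 + math.exp(-z))          # sigmoid
            g = p - y                           # gradient of the log-loss
            w = [w[0] - lr * g * f[0], w[1] - lr * g * f[1]]
            b -= lr * g
    return w, b

# Hypothetical target task with limited data: classify points by x0 + x1 > 0.
data = [([1.0, 0.5], 1), ([-1.0, -0.5], 0), ([0.8, 0.3], 1), ([-0.7, -0.9], 0)]
w, b = train_head(data)
correct = sum(
    (1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0) == y
    for (x, y), f in ((d, backbone(d[0])) for d in data)
)
print(correct, "of", len(data), "training points classified correctly")
```

Because only the two head weights and the bias are learned, a handful of labeled examples suffices, which is exactly the limited-data scenario where transfer learning pays off.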

Papers

Showing 501–550 of 10,307 papers

Title | Status | Hype
FoPro-KD: Fourier Prompted Effective Knowledge Distillation for Long-Tailed Medical Image Recognition | Code | 1
Matrix Information Theory for Self-Supervised Learning | Code | 1
Distilling BlackBox to Interpretable models for Efficient Transfer Learning | Code | 1
TOAST: Transfer Learning via Attention Steering | Code | 1
Making Offline RL Online: Collaborative World Models for Offline Visual Reinforcement Learning | Code | 1
Exploring Adapter-based Transfer Learning for Recommender Systems: Empirical Studies and Practical Insights | Code | 1
Bert4XMR: Cross-Market Recommendation with Bidirectional Encoder Representations from Transformer | Code | 1
ComSL: A Composite Speech-Language Model for End-to-End Speech-to-Text Translation | Code | 1
CREATOR: Tool Creation for Disentangling Abstract and Concrete Reasoning of Large Language Models | Code | 1
Improving few-shot learning-based protein engineering with evolutionary sampling | Code | 1
MetaAdapt: Domain Adaptive Few-Shot Misinformation Detection via Meta Learning | Code | 1
Revisiting pre-trained remote sensing model benchmarks: resizing and normalization matters | Code | 1
Denoised Self-Augmented Learning for Social Recommendation | Code | 1
PTGB: Pre-Train Graph Neural Networks for Brain Network Analysis | Code | 1
PromptNER: A Prompting Method for Few-shot Named Entity Recognition via k Nearest Neighbor Search | Code | 1
Efficient ConvBN Blocks for Transfer Learning and Beyond | Code | 1
One-Prompt to Segment All Medical Images | Code | 1
AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression | Code | 1
Real-Time Flying Object Detection with YOLOv8 | Code | 1
Tailoring Instructions to Student's Learning Levels Boosts Knowledge Distillation | Code | 1
An Ensemble Approach for Automated Theorem Proving Based on Efficient Name Invariant Graph Neural Representations | Code | 1
CLIP-VG: Self-paced Curriculum Adapting of CLIP for Visual Grounding | Code | 1
A Whisper transformer for audio captioning trained with synthetic captions and transfer learning | Code | 1
Serial Contrastive Knowledge Distillation for Continual Few-shot Relation Extraction | Code | 1
Improving Implicit Feedback-Based Recommendation through Multi-Behavior Alignment | Code | 1
MultiTACRED: A Multilingual Version of the TAC Relation Extraction Dataset | Code | 1
PointCMP: Contrastive Mask Prediction for Self-supervised Learning on Point Cloud Videos | Code | 1
Avatar Knowledge Distillation: Self-ensemble Teacher Paradigm with Uncertainty | Code | 1
Semi-supervised Domain Adaptation via Prototype-based Multi-level Learning | Code | 1
Improving Contrastive Learning of Sentence Embeddings from AI Feedback | Code | 1
Shotgun crystal structure prediction using machine-learned formation energies | Code | 1
Polyp-SAM: Transfer SAM for Polyp Segmentation | Code | 1
π-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation | Code | 1
Deep Fast Vision: Accelerated Deep Transfer Learning Vision Prototyping and Beyond | Code | 1
SCoDA: Domain Adaptive Shape Completion for Real Scans | Code | 1
Text2Seg: Remote Sensing Image Semantic Segmentation via Text-Guided Visual Foundation Models | Code | 1
RS2G: Data-Driven Scene-Graph Extraction and Embedding for Robust Autonomous Perception and Scenario Understanding | Code | 1
Exploring Incompatible Knowledge Transfer in Few-shot Image Generation | Code | 1
Model Sparsity Can Simplify Machine Unlearning | Code | 1
The MONET dataset: Multimodal drone thermal dataset recorded in rural scenarios | Code | 1
Selective Knowledge Sharing for Privacy-Preserving Federated Distillation without A Good Teacher | Code | 1
I2I: Initializing Adapters with Improvised Knowledge | Code | 1
Long-Tailed Visual Recognition via Self-Heterogeneous Integration with Knowledge Excavation | Code | 1
Towards Foundation Models and Few-Shot Parameter-Efficient Fine-Tuning for Volumetric Organ Segmentation | Code | 1
Accelerated wind farm yaw and layout optimisation with multi-fidelity deep transfer learning wake models | Code | 1
Quantifying the Impact of Data Characteristics on the Transferability of Sleep Stage Scoring Models | Code | 1
HOICLIP: Efficient Knowledge Transfer for HOI Detection with Vision-Language Models | Code | 1
Model-Based Reinforcement Learning with Isolated Imaginations | Code | 1
BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning | Code | 1
Decoupled Multimodal Distilling for Emotion Recognition | Code | 1
Page 11 of 207

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | – | Unverified
2 | DFA-ENT | Accuracy | 69.2 | – | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | – | Unverified
4 | EasyTL | Accuracy | 63.3 | – | Unverified
5 | MEDA | Accuracy | 60.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | – | Unverified