SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new but related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
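As a concrete illustration of the idea above, the following minimal NumPy sketch freezes a feature extractor and trains only a new classification head on the target task. It is a toy stand-in, not a real pre-trained network: `W_feat` represents weights that would normally be loaded from a model trained on a large source task, and the target task here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for pretrained weights: in practice these would be loaded from a
# model trained on a source task (e.g. ImageNet), not sampled randomly.
W_feat = rng.normal(size=(10, 4))
W_pre = W_feat.copy()  # snapshot, to verify the extractor stays frozen

def features(X):
    """Frozen feature extractor: W_feat is never updated below."""
    return np.tanh(X @ W_feat)

# Small synthetic target task: the label depends on the first input dimension.
X = rng.normal(size=(200, 10))
y = (X[:, 0] > 0).astype(float)

# Train only a new logistic-regression head on top of the frozen features.
F = features(X)
w_head, b_head, lr = np.zeros(4), 0.0, 0.5
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(F @ w_head + b_head)))  # sigmoid
    grad = p - y
    w_head -= lr * (F.T @ grad) / len(y)
    b_head -= lr * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(F @ w_head + b_head)))) > 0.5
accuracy = (pred == y).mean()
```

Only `w_head` and `b_head` are updated; `W_feat` is left untouched, which is the defining property of the frozen-feature transfer setup. Full fine-tuning would additionally update the extractor weights, usually at a smaller learning rate.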

Papers

Showing 401–425 of 10307 papers

| Title | Status | Hype |
|-------|--------|------|
| DDAM-PS: Diligent Domain Adaptive Mixer for Person Search | Code | 1 |
| Graph Neural Networks for Road Safety Modeling: Datasets and Evaluations for Accident Analysis | Code | 1 |
| CreoleVal: Multilingual Multitask Benchmarks for Creoles | Code | 1 |
| Label-Only Model Inversion Attacks via Knowledge Transfer | Code | 1 |
| Promise: Prompt-driven 3D Medical Image Segmentation Using Pretrained Image Foundation Models | Code | 1 |
| BirdSAT: Cross-View Contrastive Masked Autoencoders for Bird Species Classification and Mapping | Code | 1 |
| CPIA Dataset: A Comprehensive Pathological Image Analysis Dataset for Self-supervised Learning Pre-training | Code | 1 |
| PETA: Evaluating the Impact of Protein Transfer Learning with Sub-word Tokenization on Downstream Applications | Code | 1 |
| Deep Learning on SAR Imagery: Transfer Learning Versus Randomly Initialized Weights | Code | 1 |
| LoRAShear: Efficient Large Language Model Structured Pruning and Knowledge Recovery | Code | 1 |
| DREAM+: Efficient Dataset Distillation by Bidirectional Representative Matching | Code | 1 |
| On the Transferability of Visually Grounded PCFGs | Code | 1 |
| Enhancing High-Resolution 3D Generation through Pixel-wise Gradient Clipping | Code | 1 |
| Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective | Code | 1 |
| Unlocking Emergent Modularity in Large Language Models | Code | 1 |
| A Recent Survey of Heterogeneous Transfer Learning | Code | 1 |
| EViT: An Eagle Vision Transformer with Bi-Fovea Self-Attention | Code | 1 |
| Self-Supervised Dataset Distillation for Transfer Learning | Code | 1 |
| Efficient Adaptation of Large Vision Transformer via Adapter Re-Composing | Code | 1 |
| A Simple and Robust Framework for Cross-Modality Medical Image Segmentation applied to Vision Transformers | Code | 1 |
| Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain | Code | 1 |
| LumiNet: The Bright Side of Perceptual Knowledge Distillation | Code | 1 |
| SemiReward: A General Reward Model for Semi-supervised Learning | Code | 1 |
| Towards Distribution-Agnostic Generalized Category Discovery | Code | 1 |
| Mixup Your Own Pairs | Code | 1 |
Page 17 of 413

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | APCLIP | Accuracy | 84.2 | | Unverified |
| 2 | DFA-ENT | Accuracy | 69.2 | | Unverified |
| 3 | DFA-SAFN | Accuracy | 69.1 | | Unverified |
| 4 | EasyTL | Accuracy | 63.3 | | Unverified |
| 5 | MEDA | Accuracy | 60.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | CNN | 10-20% Mask PSNR | 3.23 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Co-Tuning | Accuracy | 85.65 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Physical Access | EER | 5.74 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | riadd.aucmedi | AUROC | 0.95 | | Unverified |