SOTAVerified

Transfer Learning

Transfer Learning is a machine learning technique in which a model trained on one task is re-purposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
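
As a concrete illustration of the fine-tuning workflow described above, here is a minimal sketch in PyTorch, assuming torchvision is available: it loads an ImageNet-pretrained ResNet-18, freezes the backbone so the pre-trained features are reused, and replaces the classification head for the new task. The `num_classes` value and the `fine_tune` helper are hypothetical placeholders, not part of any paper listed below.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pre-trained on ImageNet (the "source" task).
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)

# Freeze the backbone so the pre-trained features are kept as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the new ("target") task.
num_classes = 10  # hypothetical: set to your target task's class count
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def fine_tune(loader, epochs=5):
    """One possible fine-tuning loop; `loader` yields (images, labels)."""
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```

When more target data is available, a common variant is to unfreeze some of the deeper backbone layers and fine-tune them with a smaller learning rate than the new head.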

Papers

Showing 701–750 of 10,307 papers

Title | Status | Hype
Transfer learning for time series classification using synthetic data generation | Code | 1
JPerceiver: Joint Perception Network for Depth, Pose and Layout Estimation in Driving Scenes | Code | 1
Convolutional Bypasses Are Better Vision Transformer Adapters | Code | 1
HEAD: HEtero-Assists Distillation for Heterogeneous Object Detectors | Code | 1
A Data-Based Perspective on Transfer Learning | Code | 1
Scaling Novel Object Detection with Weakly Supervised Detection Transformers | Code | 1
Consecutive Pretraining: A Knowledge Transfer Learning Strategy with Relevant Unlabeled Data for Remote Sensing Domain | Code | 1
PointMCD: Boosting Deep Point Cloud Encoders via Multi-view Cross-modal Distillation for 3D Shape Recognition | Code | 1
When does Bias Transfer in Transfer Learning? | Code | 1
Open-Vocabulary Multi-Label Classification via Multi-Modal Knowledge Transfer | Code | 1
Factorizing Knowledge in Neural Networks | Code | 1
CL-ReLKT: Cross-lingual Language Knowledge Transfer for Multilingual Retrieval Question Answering | Code | 1
No Reason for No Supervision: Improved Generalization in Supervised Models | Code | 1
Transfer Learning with Deep Tabular Models | Code | 1
ST-Adapter: Parameter-Efficient Image-to-Video Transfer Learning | Code | 1
Automatic identification of segmentation errors for radiotherapy using geometric learning | Code | 1
SFace: Privacy-friendly and Accurate Face Recognition using Synthetic Data | Code | 1
Contextual Squeeze-and-Excitation for Efficient Few-Shot Image Classification | Code | 1
Multistream Gaze Estimation with Anatomical Eye Region Isolation by Synthetic to Real Transfer Learning | Code | 1
CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks | Code | 1
CtrlFormer: Learning Transferable State Representation for Visual Control via Transformer | Code | 1
FiT: Parameter Efficient Few-shot Transfer Learning for Personalized and Federated Image Classification | Code | 1
Time Interval-enhanced Graph Neural Network for Shared-account Cross-domain Sequential Recommendation | Code | 1
CARLANE: A Lane Detection Benchmark for Unsupervised Domain Adaptation from Simulation to multiple Real-World Domains | Code | 1
Hybrid thermal modeling of additive manufacturing processes using physics-informed neural networks for temperature prediction and parameter identification | Code | 1
Evaluating histopathology transfer learning with ChampKit | Code | 1
The Modality Focusing Hypothesis: Towards Understanding Crossmodal Knowledge Distillation | Code | 1
APT-36K: A Large-scale Benchmark for Animal Pose Estimation and Tracking | Code | 1
Toward Real-world Single Image Deraining: A New Benchmark and Beyond | Code | 1
CFA: Coupled-hypersphere-based Feature Adaptation for Target-Oriented Anomaly Localization | Code | 1
SPD domain-specific batch normalization to crack interpretable unsupervised domain adaptation in EEG | Code | 1
Multi-Aspect Transfer Learning for Detecting Low Resource Mental Disorders on Social Media | Code | 1
Pars-ABSA: a Manually Annotated Aspect-based Sentiment Analysis Benchmark on Farsi Product Reviews | Code | 1
ArMATH: a Dataset for Solving Arabic Math Word Problems | Code | 1
Transfer without Forgetting | Code | 1
HiViT: Hierarchical Vision Transformer Meets Masked Image Modeling | Code | 1
SupMAE: Supervised Masked Autoencoders Are Efficient Vision Learners | Code | 1
Semantic-aware Dense Representation Learning for Remote Sensing Image Change Detection | Code | 1
Spatio-Temporal Graph Few-Shot Learning with Cross-City Knowledge Transfer | Code | 1
Linear Connectivity Reveals Generalization Strategies | Code | 1
ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts | Code | 1
Hyper-X: A Unified Hypernetwork for Multi-Task Multilingual Transfer | Code | 1
Thalamus: a brain-inspired algorithm for biologically-plausible continual learning and disentangled representations | Code | 1
The Geometry of Multilingual Language Model Representations | Code | 1
Vision Transformers in 2022: An Update on Tiny ImageNet | Code | 1
Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors | Code | 1
Global Contrast Masked Autoencoders Are Powerful Pathological Representation Learners | Code | 1
A unified framework for dataset shift diagnostics | Code | 1
Label-Efficient Self-Supervised Federated Learning for Tackling Data Heterogeneity in Medical Imaging | Code | 1
AutoKE: An automatic knowledge embedding framework for scientific machine learning | Code | 1
Page 15 of 207

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | – | Unverified
2 | DFA-ENT | Accuracy | 69.2 | – | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | – | Unverified
4 | EasyTL | Accuracy | 63.3 | – | Unverified
5 | MEDA | Accuracy | 60.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | – | Unverified