SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new but related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
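The recipe described above (reuse a frozen pre-trained feature extractor, train only a small new head on the target task) can be sketched in a few lines. This is a minimal illustrative example, not any specific paper's method: a fixed random projection stands in for the pre-trained backbone (in practice you would use, e.g., an ImageNet-trained network), and the toy dataset and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" extractor: frozen weights that we reuse, not retrain.
# A real transfer-learning setup would load these from a model trained
# on a large source task; a random projection is a stand-in here.
W_pre = rng.normal(size=(2, 16))

def extract_features(x):
    """Frozen feature extractor (stands in for a pre-trained backbone)."""
    return np.tanh(x @ W_pre)

# Small labeled dataset for the *new* task -- the situation where
# training from scratch would be data-starved.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy labels

# New task-specific head: the only parameters we actually train.
w = np.zeros(16)
b = 0.0
lr = 0.5

feats = extract_features(X)            # computed once; backbone is frozen
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid head
    grad = p - y                                # logistic-loss gradient
    w -= lr * feats.T @ grad / len(y)
    b -= lr * grad.mean()

preds = (1.0 / (1.0 + np.exp(-(feats @ w + b))) > 0.5).astype(float)
accuracy = (preds == y).mean()
```

Because only the 16-dimensional head is updated, the number of trainable parameters (and the amount of target-task data needed) is far smaller than for full from-scratch training; full fine-tuning would additionally unfreeze `W_pre` with a lower learning rate.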

Papers

Showing 726–750 of 10,307 papers

| Title | Status | Hype |
|---|---|---|
| Evaluating histopathology transfer learning with ChampKit | Code | 1 |
| The Modality Focusing Hypothesis: Towards Understanding Crossmodal Knowledge Distillation | Code | 1 |
| APT-36K: A Large-scale Benchmark for Animal Pose Estimation and Tracking | Code | 1 |
| Toward Real-world Single Image Deraining: A New Benchmark and Beyond | Code | 1 |
| CFA: Coupled-hypersphere-based Feature Adaptation for Target-Oriented Anomaly Localization | Code | 1 |
| SPD domain-specific batch normalization to crack interpretable unsupervised domain adaptation in EEG | Code | 1 |
| Multi-Aspect Transfer Learning for Detecting Low Resource Mental Disorders on Social Media | Code | 1 |
| Pars-ABSA: a Manually Annotated Aspect-based Sentiment Analysis Benchmark on Farsi Product Reviews | Code | 1 |
| ArMATH: a Dataset for Solving Arabic Math Word Problems | Code | 1 |
| Transfer without Forgetting | Code | 1 |
| HiViT: Hierarchical Vision Transformer Meets Masked Image Modeling | Code | 1 |
| SupMAE: Supervised Masked Autoencoders Are Efficient Vision Learners | Code | 1 |
| Semantic-aware Dense Representation Learning for Remote Sensing Image Change Detection | Code | 1 |
| Spatio-Temporal Graph Few-Shot Learning with Cross-City Knowledge Transfer | Code | 1 |
| Linear Connectivity Reveals Generalization Strategies | Code | 1 |
| ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts | Code | 1 |
| Hyper-X: A Unified Hypernetwork for Multi-Task Multilingual Transfer | Code | 1 |
| Thalamus: a brain-inspired algorithm for biologically-plausible continual learning and disentangled representations | Code | 1 |
| The Geometry of Multilingual Language Model Representations | Code | 1 |
| Vision Transformers in 2022: An Update on Tiny ImageNet | Code | 1 |
| Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors | Code | 1 |
| Global Contrast Masked Autoencoders Are Powerful Pathological Representation Learners | Code | 1 |
| A unified framework for dataset shift diagnostics | Code | 1 |
| Label-Efficient Self-Supervised Federated Learning for Tackling Data Heterogeneity in Medical Imaging | Code | 1 |
| AutoKE: An automatic knowledge embedding framework for scientific machine learning | Code | 1 |
Page 30 of 413

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | APCLIP | Accuracy | 84.2 | | Unverified |
| 2 | DFA-ENT | Accuracy | 69.2 | | Unverified |
| 3 | DFA-SAFN | Accuracy | 69.1 | | Unverified |
| 4 | EasyTL | Accuracy | 63.3 | | Unverified |
| 5 | MEDA | Accuracy | 60.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CNN | 10-20% Mask PSNR | 3.23 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Co-Tuning | Accuracy | 85.65 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Physical Access | EER | 5.74 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | riadd.aucmedi | AUROC | 0.95 | | Unverified |