SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new but related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
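The recipe described above, freezing a pre-trained backbone and fine-tuning a new task-specific head, can be sketched in PyTorch. This is a minimal illustration, not any specific paper's method; the small `backbone` network is a toy stand-in for a real pre-trained model (in practice you would load, e.g., a torchvision ResNet with pre-trained weights), and the input size, class count, and learning rate are arbitrary:

```python
import torch
import torch.nn as nn

# Stand-in for a network pre-trained on a source task. In a real setting
# this would be loaded with pre-trained weights rather than built here.
backbone = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
)

# 1. Freeze the pre-trained weights so they are not updated.
for p in backbone.parameters():
    p.requires_grad = False

# 2. Attach a fresh task-specific head for the target task (here 5 classes).
model = nn.Sequential(backbone, nn.Linear(64, 5))

# 3. Optimize only the trainable parameters (the new head).
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

# One fine-tuning step on a random batch of target-task data.
x = torch.randn(8, 32)          # batch of target-task inputs
y = torch.randint(0, 5, (8,))   # target-task labels
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```

With more target data available, it is also common to "unfreeze" some or all backbone layers after the head has converged and continue training at a lower learning rate.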

Papers

Showing 751–775 of 10307 papers

Title | Status | Hype
Classification of Large-Scale High-Resolution SAR Images with Deep Transfer Learning | Code | 1
An Evaluation of Self-Supervised Pre-Training for Skin-Lesion Analysis | Code | 1
Classification of Epithelial Ovarian Carcinoma Whole-Slide Pathology Images Using Deep Transfer Learning | Code | 1
An Evolutionary Multitasking Algorithm with Multiple Filtering for High-Dimensional Feature Selection | Code | 1
A Broader Study of Cross-Domain Few-Shot Learning | Code | 1
CleanNet: Transfer Learning for Scalable Image Classifier Training with Label Noise | Code | 1
Class-relation Knowledge Distillation for Novel Class Discovery | Code | 1
GPPT: Graph Pre-training and Prompt Tuning to Generalize Graph Neural Networks | Code | 1
CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks | Code | 1
A New Knowledge Distillation Network for Incremental Few-Shot Surface Defect Detection | Code | 1
CLIP-Lite: Information Efficient Visual Representation Learning with Language Supervision | Code | 1
Deep Transferring Quantization | Code | 1
aschern at SemEval-2020 Task 11: It Takes Three to Tango: RoBERTa, CRF, and Transfer Learning | Code | 1
CLIP-VG: Self-paced Curriculum Adapting of CLIP for Visual Grounding | Code | 1
DeezyMatch: A Flexible Deep Learning Approach to Fuzzy String Matching | Code | 1
Graph Contrastive Learning with Augmentations | Code | 1
Denoised Self-Augmented Learning for Social Recommendation | Code | 1
CODE-CL: Conceptor-Based Gradient Projection for Deep Continual Learning | Code | 1
CODE-AE: A Coherent De-confounding Autoencoder for Predicting Patient-Specific Drug Response From Cell Line Transcriptomics | Code | 1
Model LEGO: Creating Models Like Disassembling and Assembling Building Blocks | Code | 1
ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy | Code | 1
CodeTrans: Towards Cracking the Language of Silicon's Code Through Self-Supervised Deep Learning and High Performance Computing | Code | 1
Grounding Psychological Shape Space in Convolutional Neural Networks | Code | 1
GroupContrast: Semantic-aware Self-supervised Representation Learning for 3D Understanding | Code | 1
Densely Guided Knowledge Distillation using Multiple Teacher Assistants | Code | 1
Page 31 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | – | Unverified
2 | DFA-ENT | Accuracy | 69.2 | – | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | – | Unverified
4 | EasyTL | Accuracy | 63.3 | – | Unverified
5 | MEDA | Accuracy | 60.3 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | – | Unverified