SOTAVerified

Transfer Learning

Transfer Learning is a machine learning technique in which a model trained on one task is re-purposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new but related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
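The freeze-the-body, retrain-the-head recipe described above can be sketched in plain NumPy. This is a minimal illustration, not any specific method from the papers below: the "pre-trained" extractor is a hypothetical frozen projection standing in for a real pre-trained network, and all names and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pre-trained feature extractor:
# a frozen nonlinear projection whose weights are never updated.
W_pretrained = 0.3 * rng.normal(size=(10, 4))

def extract_features(x):
    return np.tanh(x @ W_pretrained)  # frozen "body" of the network

# Toy data for the new (target) task
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Adaptation trains ONLY the new task head (logistic regression)
# on top of the frozen features -- the core transfer-learning recipe.
feats = extract_features(X)
w_head = np.zeros(4)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w_head)))
    w_head -= 0.1 * feats.T @ (p - y) / len(y)  # gradient step, head only

pred = 1.0 / (1.0 + np.exp(-(feats @ w_head))) > 0.5
acc = float(np.mean(pred == y))
```

In practice the frozen body would be a network pre-trained on a large source dataset, and "minor modifications" often means unfreezing the top few layers with a small learning rate rather than training only a linear head.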

Papers

Showing 951–975 of 10307 papers

Title | Status | Hype
A Token is Worth over 1,000 Tokens: Efficient Knowledge Distillation through Low-Rank Clone | Code | 1
DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning | Code | 1
Densely Guided Knowledge Distillation using Multiple Teacher Assistants | Code | 1
DenseShift: Towards Accurate and Efficient Low-Bit Power-of-Two Quantization | Code | 1
Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks | Code | 1
DREAM+: Efficient Dataset Distillation by Bidirectional Representative Matching | Code | 1
MultiEYE: Dataset and Benchmark for OCT-Enhanced Retinal Disease Recognition from Fundus Images | Code | 1
MULTIFLOW: Shifting Towards Task-Agnostic Vision-Language Pruning | Code | 1
Accuracy enhancement method for speech emotion recognition from spectrogram using temporal frequency correlation and positional information learning through knowledge transfer | Code | 1
ArMATH: a Dataset for Solving Arabic Math Word Problems | Code | 1
Adapting Pre-trained Language Models to African Languages via Multilingual Adaptive Fine-Tuning | Code | 1
MultiLoKo: a multilingual local knowledge benchmark for LLMs spanning 31 languages | Code | 1
Dual Transfer Learning for Event-based End-task Prediction via Pluggable Event to Image Translation | Code | 1
Multimodal Side-Tuning for Document Classification | Code | 1
Multinational Address Parsing: A Zero-Shot Evaluation | Code | 1
Multiple-Input Fourier Neural Operator (MIFNO) for source-dependent 3D elastodynamics | Code | 1
EEG-Reptile: An Automatized Reptile-Based Meta-Learning Library for BCIs | Code | 1
Multiresolution Convolutional Autoencoders | Code | 1
Accurate Clinical Toxicity Prediction using Multi-task Deep Neural Nets and Contrastive Molecular Explanations | Code | 1
MultiTACRED: A Multilingual Version of the TAC Relation Extraction Dataset | Code | 1
AdaBoost-CNN: An adaptive boosting algorithm for convolutional neural networks to classify multi-class imbalanced datasets using transfer learning | Code | 1
Detection and Classification of Diabetic Retinopathy using Deep Learning Algorithms for Segmentation to Facilitate Referral Recommendation for Test and Treatment Prediction | Code | 1
ArtNeRF: A Stylized Neural Field for 3D-Aware Cartoonized Face Synthesis | Code | 1
ARWKV: Pretrain is not what we need, an RNN-Attention-Based Language Model Born from Transformer | Code | 1
Efficient Visual Pretraining with Contrastive Detection | Code | 1
Page 39 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | - | Unverified
2 | DFA-ENT | Accuracy | 69.2 | - | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | - | Unverified
4 | EasyTL | Accuracy | 63.3 | - | Unverified
5 | MEDA | Accuracy | 60.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | - | Unverified