SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new, related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
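The freeze-and-retrain pattern described above can be sketched in a few lines. In this minimal NumPy illustration, a fixed random projection stands in for a pretrained backbone (an assumption made purely for a self-contained example; real pipelines would load, say, an ImageNet-pretrained network): the feature extractor is kept frozen, and only a new linear head is fit on the scarce target-task data.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- "Pretrained" backbone (stand-in) ---
# In practice W1 would come from training on a large source task;
# here a fixed random projection plays that role for illustration.
d, h = 20, 16
W1 = rng.normal(scale=1 / np.sqrt(d), size=(d, h))

def features(X):
    """Frozen feature extractor: W1 is never updated during fine-tuning."""
    return np.tanh(X @ W1)

# --- Fine-tuning on the target task (scarce data) ---
X_tgt = rng.normal(size=(30, d))     # only 30 labelled target examples
y_tgt = np.sin(X_tgt[:, 0])          # toy target signal

# Fit ONLY the new linear head on frozen features (closed-form least squares).
Phi = features(X_tgt)
head, *_ = np.linalg.lstsq(Phi, y_tgt, rcond=None)

pred = Phi @ head                    # predictions from frozen backbone + new head
```

Because only the small head (here 16 parameters) is estimated, far less target data is needed than training the whole model from scratch; this is the same reason full frameworks freeze backbone layers and replace only the final classifier.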

Papers

Showing 551–575 of 10,307 papers

| Title | Status | Hype |
| --- | --- | --- |
| CreoleVal: Multilingual Multitask Benchmarks for Creoles | Code | 1 |
| A Qualitative Evaluation of Language Models on Automatic Question-Answering for COVID-19 | Code | 1 |
| AquaVision: Automating the detection of waste in water bodies using deep transfer learning | Code | 1 |
| ArMATH: a Dataset for Solving Arabic Math Word Problems | Code | 1 |
| APTv2: Benchmarking Animal Pose Estimation and Tracking with a Large-scale Dataset and Beyond | Code | 1 |
| Adaptive Consistency Regularization for Semi-Supervised Transfer Learning | Code | 1 |
| AquilaMoE: Efficient Training for MoE Models with Scale-Up and Scale-Out Strategies | Code | 1 |
| A Simple yet Effective Framework for Few-Shot Aspect-Based Sentiment Analysis | Code | 1 |
| Critical Thinking for Language Models | Code | 1 |
| CUDA: Convolution-based Unlearnable Datasets | Code | 1 |
| Deep comparisons of Neural Networks from the EEGNet family | Code | 1 |
| Co-Tuning for Transfer Learning | Code | 1 |
| Convolutional Bypasses Are Better Vision Transformer Adapters | Code | 1 |
| COVID-19 detection from scarce chest x-ray image data using few-shot deep learning approach | Code | 1 |
| Conv-Adapter: Exploring Parameter Efficient Transfer Learning for ConvNets | Code | 1 |
| A CNN-Based Blind Denoising Method for Endoscopic Images | Code | 1 |
| ConvLab-3: A Flexible Dialogue System Toolkit Based on a Unified Data Format | Code | 1 |
| Contrastive Learning with Synthetic Positives | Code | 1 |
| Parameter Efficient Adaptation for Image Restoration with Heterogeneous Mixture-of-Experts | Code | 1 |
| Contrastive Representation Distillation | Code | 1 |
| Aligning Medical Images with General Knowledge from Large Language Models | Code | 1 |
| Aligning Pretraining for Detection via Object-Level Contrastive Learning | Code | 1 |
| Convolutional Neural Networks for Classification of Alzheimer's Disease: Overview and Reproducible Evaluation | Code | 1 |
| Cooperative Self-training of Machine Reading Comprehension | Code | 1 |
| ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | APCLIP | Accuracy | 84.2 | | Unverified |
| 2 | DFA-ENT | Accuracy | 69.2 | | Unverified |
| 3 | DFA-SAFN | Accuracy | 69.1 | | Unverified |
| 4 | EasyTL | Accuracy | 63.3 | | Unverified |
| 5 | MEDA | Accuracy | 60.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CNN | 10-20% Mask PSNR | 3.23 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Co-Tuning | Accuracy | 85.65 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Physical Access | EER | 5.74 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | riadd.aucmedi | AUROC | 0.95 | | Unverified |