SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a different but related task. The idea is to leverage the knowledge encoded in a pre-trained model when solving a new problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
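The adaptation step described above can be sketched in a few lines: freeze a pre-trained feature extractor and train only a small new task head on top of it. The "backbone" below is just a fixed linear map standing in for a real pre-trained network (e.g. an ImageNet-trained CNN), and the toy dataset and all names are purely illustrative, not from any paper listed on this page.

```python
import math

# Toy "pretrained backbone": a fixed (frozen) feature extractor.
# In practice this would be a network trained on a large source task;
# here it is a hand-fixed linear map, for illustration only.
BACKBONE_W = [[0.9, -0.2], [0.1, 1.1]]

def backbone(x):
    # Frozen: BACKBONE_W is never updated during fine-tuning.
    return [sum(w * xi for w, xi in zip(row, x)) for row in BACKBONE_W]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fine_tune(data, epochs=200, lr=0.5):
    """Train only a new task head (logistic regression) on frozen features."""
    head_w = [0.0, 0.0]
    head_b = 0.0
    for _ in range(epochs):
        for x, y in data:
            f = backbone(x)                  # frozen features, no backprop into backbone
            p = sigmoid(sum(w * fi for w, fi in zip(head_w, f)) + head_b)
            g = p - y                        # gradient of the logistic loss w.r.t. the logit
            head_w = [w - lr * g * fi for w, fi in zip(head_w, f)]
            head_b -= lr * g
    return head_w, head_b

# Tiny target-task dataset: label = 1 iff the first coordinate is positive.
data = [([1.0, 0.5], 1), ([0.8, -0.3], 1), ([-1.0, 0.2], 0), ([-0.7, -0.9], 0)]
head_w, head_b = fine_tune(data)

def predict(x):
    f = backbone(x)
    return int(sigmoid(sum(w * fi for w, fi in zip(head_w, f)) + head_b) > 0.5)

print([predict(x) for x, _ in data])  # [1, 1, 0, 0]
```

Because only the two head weights and a bias are trained, the model fits this four-sample task without touching the backbone; this is why transfer learning works with far less target data than training from scratch.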

Papers

Showing 351-400 of 10,307 papers

Title | Status | Hype
Commonality in Natural Images Rescues GANs: Pretraining GANs with Generic and Privacy-free Synthetic Data | Code | 1
Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks | Code | 1
Communication-Efficient and Privacy-Preserving Feature-based Federated Transfer Learning | Code | 1
Context-Transformer: Tackling Object Confusion for Few-Shot Detection | Code | 1
A Comprehensive Study on Torchvision Pre-trained Models for Fine-grained Inter-species Classification | Code | 1
Pre-training technique to localize medical BERT and enhance biomedical BERT | Code | 1
A proposal for Multimodal Emotion Recognition using aural transformers and Action Units on RAVDESS dataset | Code | 1
APT-36K: A Large-scale Benchmark for Animal Pose Estimation and Tracking | Code | 1
AquilaMoE: Efficient Training for MoE Models with Scale-Up and Scale-Out Strategies | Code | 1
AquaVision: Automating the detection of waste in water bodies using deep transfer learning | Code | 1
CodeTrans: Towards Cracking the Language of Silicon's Code Through Self-Supervised Deep Learning and High Performance Computing | Code | 1
AraT5: Text-to-Text Transformers for Arabic Language Generation | Code | 1
Contour Knowledge Transfer for Salient Object Detection | Code | 1
Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning | Code | 1
AReLU: Attention-based Rectified Linear Unit | Code | 1
Contrastive Embeddings for Neural Architectures | Code | 1
An Empirical Analysis of Image-Based Learning Techniques for Malware Classification | Code | 1
A Deep Learning-Based Supervised Transfer Learning Framework for DOA Estimation with Array Imperfections | Code | 1
ArMATH: a Dataset for Solving Arabic Math Word Problems | Code | 1
ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy | Code | 1
A deep learning framework for solution and discovery in solid mechanics | Code | 1
ARWKV: Pretrain is not what we need, an RNN-Attention-Based Language Model Born from Transformer | Code | 1
A Comprehensive Survey on Transfer Learning | Code | 1
ArtNeRF: A Stylized Neural Field for 3D-Aware Cartoonized Face Synthesis | Code | 1
A simple, efficient and scalable contrastive masked autoencoder for learning visual representations | Code | 1
A Scalable and Generalizable Pathloss Map Prediction | Code | 1
COVID-MobileXpert: On-Device COVID-19 Patient Triage and Follow-up using Chest X-rays | Code | 1
aschern at SemEval-2020 Task 11: It Takes Three to Tango: RoBERTa, CRF, and Transfer Learning | Code | 1
A Simple and Effective Approach to Automatic Post-Editing with Transfer Learning | Code | 1
CrAM: A Compression-Aware Minimizer | Code | 1
A Simple and Robust Framework for Cross-Modality Medical Image Segmentation applied to Vision Transformers | Code | 1
Boosting Memory Efficiency in Transfer Learning for High-Resolution Medical Image Classification | Code | 1
Audio-based Near-Duplicate Video Retrieval with Audio Similarity Learning | Code | 1
COLA: Cross-city Mobility Transformer for Human Trajectory Simulation | Code | 1
Crosslingual Capabilities and Knowledge Barriers in Multilingual Large Language Models | Code | 1
Continual learning with hypernetworks | Code | 1
Assemble Foundation Models for Automatic Code Summarization | Code | 1
ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms | Code | 1
AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression | Code | 1
CODE-AE: A Coherent De-confounding Autoencoder for Predicting Patient-Specific Drug Response From Cell Line Transcriptomics | Code | 1
AD-L-JEPA: Self-Supervised Spatial World Models with Joint Embedding Predictive Architecture for Autonomous Driving with LiDAR Data | Code | 1
An Empirical Investigation of Model-to-Model Distribution Shifts in Trained Convolutional Filters | Code | 1
Cumulative Spatial Knowledge Distillation for Vision Transformers | Code | 1
A Convolutional LSTM based Residual Network for Deepfake Video Detection | Code | 1
A Study of Face Obfuscation in ImageNet | Code | 1
CutPaste: Self-Supervised Learning for Anomaly Detection and Localization | Code | 1
A Survey: Deep Learning for Hyperspectral Image Classification with Few Labeled Samples | Code | 1
CODE-CL: Conceptor-Based Gradient Projection for Deep Continual Learning | Code | 1
Making Offline RL Online: Collaborative World Models for Offline Visual Reinforcement Learning | Code | 1
Page 8 of 207

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | - | Unverified
2 | DFA-ENT | Accuracy | 69.2 | - | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | - | Unverified
4 | EasyTL | Accuracy | 63.3 | - | Unverified
5 | MEDA | Accuracy | 60.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | - | Unverified