SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new but related problem. This is useful when limited data is available to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.
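The "freeze the pre-trained model, fine-tune a small head" recipe described above can be sketched in a few lines. This is an illustrative toy, not any specific paper's method: the frozen "pretrained" extractor is stood in for by a fixed random projection, and only a tiny logistic-regression head is trained on the new task's limited data. All names and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained feature extractor: its weights are frozen
# (never updated below), mimicking reuse of knowledge from the original task.
W_pretrained = rng.normal(size=(10, 4))

def extract_features(x):
    """Frozen extractor: maps raw inputs to reusable features."""
    return np.tanh(x @ W_pretrained)

# Tiny synthetic dataset for the *new* task (the limited-data scenario).
X = rng.normal(size=(64, 10))
y = (X[:, 0] > 0).astype(float)

# Trainable head: the only parameters that are fine-tuned.
w_head = np.zeros(4)
b_head = 0.0

lr = 0.5
for _ in range(200):
    feats = extract_features(X)                 # frozen forward pass
    logits = feats @ w_head + b_head
    preds = 1.0 / (1.0 + np.exp(-logits))       # sigmoid
    grad = preds - y                            # dLoss/dlogits for BCE
    w_head -= lr * feats.T @ grad / len(X)      # update head only
    b_head -= lr * grad.mean()

accuracy = ((preds > 0.5) == (y > 0.5)).mean()
print(f"head-only fine-tuning train accuracy: {accuracy:.2f}")
```

Because only the 5 head parameters are updated, this trains with far less data than fitting all weights from scratch would require, which is the core appeal of transfer learning.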

(Image credit: Subodh Malgonde)

Papers

Showing 401–450 of 10,307 papers

Title | Status | Hype
DDAM-PS: Diligent Domain Adaptive Mixer for Person Search | Code | 1
Graph Neural Networks for Road Safety Modeling: Datasets and Evaluations for Accident Analysis | Code | 1
Promise: Prompt-driven 3D Medical Image Segmentation Using Pretrained Image Foundation Models | Code | 1
CreoleVal: Multilingual Multitask Benchmarks for Creoles | Code | 1
Label-Only Model Inversion Attacks via Knowledge Transfer | Code | 1
BirdSAT: Cross-View Contrastive Masked Autoencoders for Bird Species Classification and Mapping | Code | 1
CPIA Dataset: A Comprehensive Pathological Image Analysis Dataset for Self-supervised Learning Pre-training | Code | 1
Deep Learning on SAR Imagery: Transfer Learning Versus Randomly Initialized Weights | Code | 1
PETA: Evaluating the Impact of Protein Transfer Learning with Sub-word Tokenization on Downstream Applications | Code | 1
LoRAShear: Efficient Large Language Model Structured Pruning and Knowledge Recovery | Code | 1
DREAM+: Efficient Dataset Distillation by Bidirectional Representative Matching | Code | 1
On the Transferability of Visually Grounded PCFGs | Code | 1
Enhancing High-Resolution 3D Generation through Pixel-wise Gradient Clipping | Code | 1
Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective | Code | 1
Unlocking Emergent Modularity in Large Language Models | Code | 1
A Recent Survey of Heterogeneous Transfer Learning | Code | 1
EViT: An Eagle Vision Transformer with Bi-Fovea Self-Attention | Code | 1
Self-Supervised Dataset Distillation for Transfer Learning | Code | 1
Efficient Adaptation of Large Vision Transformer via Adapter Re-Composing | Code | 1
A Simple and Robust Framework for Cross-Modality Medical Image Segmentation applied to Vision Transformers | Code | 1
Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain | Code | 1
LumiNet: The Bright Side of Perceptual Knowledge Distillation | Code | 1
SemiReward: A General Reward Model for Semi-supervised Learning | Code | 1
Towards Distribution-Agnostic Generalized Category Discovery | Code | 1
Mixup Your Own Pairs | Code | 1
OceanBench: The Sea Surface Height Edition | Code | 1
Confidence-based Visual Dispersal for Few-shot Unsupervised Domain Adaptation | Code | 1
DistillBEV: Boosting Multi-Camera 3D Object Detection with Cross-Modal Knowledge Distillation | Code | 1
GraphAdapter: Tuning Vision-Language Models With Dual Knowledge Graph | Code | 1
A Text Classification-Based Approach for Evaluating and Enhancing the Machine Interpretability of Building Codes | Code | 1
Long-tail Augmented Graph Contrastive Learning for Recommendation | Code | 1
GECTurk: Grammatical Error Correction and Detection Dataset for Turkish | Code | 1
NoisyNN: Exploring the Impact of Information Entropy Change in Learning Systems | Code | 1
Fine-Tuning Self-Supervised Learning Models for End-to-End Pronunciation Scoring | Code | 1
SCT: A Simple Baseline for Parameter-Efficient Fine-Tuning via Salient Channels | Code | 1
Salient Object Detection in Optical Remote Sensing Images Driven by Transformer | Code | 1
Nucleus-aware Self-supervised Pretraining Using Unpaired Image-to-image Translation for Histopathology Images | Code | 1
NineRec: A Benchmark Dataset Suite for Evaluating Transferable Recommendation | Code | 1
Disentangling Spatial and Temporal Learning for Efficient Image-to-Video Transfer Learning | Code | 1
DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning | Code | 1
A Strong and Simple Deep Learning Baseline for BCI MI Decoding | Code | 1
Overcoming Data Limitations: A Few-Shot Specific Emitter Identification Method Using Self-Supervised Learning and Adversarial Augmentation | Code | 1
QS-TTS: Towards Semi-Supervised Text-to-Speech Synthesis via Vector-Quantized Self-Supervised Speech Representation Learning | Code | 1
Document AI: A Comparative Study of Transformer-Based, Graph-Based Models, and Convolutional Neural Networks For Document Layout Analysis | Code | 1
A General-Purpose Self-Supervised Model for Computational Pathology | Code | 1
UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory | Code | 1
Exploring the Transfer Learning Capabilities of CLIP in Domain Generalization for Diabetic Retinopathy | Code | 1
Transfer Learning for Microstructure Segmentation with CS-UNet: A Hybrid Algorithm with Transformer and CNN Encoders | Code | 1
RestNet: Boosting Cross-Domain Few-Shot Segmentation with Residual Transformation Network | Code | 1
Parameter-Efficient Transfer Learning for Remote Sensing Image-Text Retrieval | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | - | Unverified
2 | DFA-ENT | Accuracy | 69.2 | - | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | - | Unverified
4 | EasyTL | Accuracy | 63.3 | - | Unverified
5 | MEDA | Accuracy | 60.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | - | Unverified