SOTAVerified

Transfer Learning

Transfer Learning is a machine learning technique in which a model trained on one task is re-purposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.
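The core idea can be sketched with a toy example. This is a minimal, hypothetical illustration (not from any paper listed below): the "pretrained" feature extractor is stood in for by a fixed, frozen linear map, and only a new task-specific head is trained on the small target dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" extractor (a stand-in for a real backbone):
# maps raw inputs (dim 8) to features (dim 4). It receives no updates.
W_pretrained = rng.normal(size=(8, 4))

def extract_features(x):
    return np.tanh(x @ W_pretrained)

# Small target-task dataset -- limited data is the typical motivation.
X = rng.normal(size=(64, 8))
y = (X[:, 0] > 0).astype(float)   # toy binary labels

# New head trained from scratch: a logistic-regression layer.
w_head = np.zeros(4)
b_head = 0.0

def loss_and_grads(X, y):
    feats = extract_features(X)
    p = 1.0 / (1.0 + np.exp(-(feats @ w_head + b_head)))
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    err = p - y
    return loss, feats.T @ err / len(y), err.mean()

loss_before, _, _ = loss_and_grads(X, y)
for _ in range(200):              # fine-tune only the head
    _, gw, gb = loss_and_grads(X, y)
    w_head -= 0.5 * gw
    b_head -= 0.5 * gb
loss_after, _, _ = loss_and_grads(X, y)
print(f"head-only training loss: {loss_before:.3f} -> {loss_after:.3f}")
```

In practice the frozen map would be a network such as a CNN or transformer trained on a large source dataset, and "minor modifications" often means unfreezing a few top layers with a small learning rate rather than keeping the backbone entirely fixed.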

(Image credit: Subodh Malgonde)

Papers

Showing 426–450 of 10307 papers

Title | Status | Hype
OceanBench: The Sea Surface Height Edition | Code | 1
Confidence-based Visual Dispersal for Few-shot Unsupervised Domain Adaptation | Code | 1
DistillBEV: Boosting Multi-Camera 3D Object Detection with Cross-Modal Knowledge Distillation | Code | 1
GraphAdapter: Tuning Vision-Language Models With Dual Knowledge Graph | Code | 1
A Text Classification-Based Approach for Evaluating and Enhancing the Machine Interpretability of Building Codes | Code | 1
Long-tail Augmented Graph Contrastive Learning for Recommendation | Code | 1
GECTurk: Grammatical Error Correction and Detection Dataset for Turkish | Code | 1
NoisyNN: Exploring the Impact of Information Entropy Change in Learning Systems | Code | 1
Fine-Tuning Self-Supervised Learning Models for End-to-End Pronunciation Scoring | Code | 1
SCT: A Simple Baseline for Parameter-Efficient Fine-Tuning via Salient Channels | Code | 1
Salient Object Detection in Optical Remote Sensing Images Driven by Transformer | Code | 1
Nucleus-aware Self-supervised Pretraining Using Unpaired Image-to-image Translation for Histopathology Images | Code | 1
NineRec: A Benchmark Dataset Suite for Evaluating Transferable Recommendation | Code | 1
Disentangling Spatial and Temporal Learning for Efficient Image-to-Video Transfer Learning | Code | 1
DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning | Code | 1
A Strong and Simple Deep Learning Baseline for BCI MI Decoding | Code | 1
Overcoming Data Limitations: A Few-Shot Specific Emitter Identification Method Using Self-Supervised Learning and Adversarial Augmentation | Code | 1
QS-TTS: Towards Semi-Supervised Text-to-Speech Synthesis via Vector-Quantized Self-Supervised Speech Representation Learning | Code | 1
Document AI: A Comparative Study of Transformer-Based, Graph-Based Models, and Convolutional Neural Networks For Document Layout Analysis | Code | 1
A General-Purpose Self-Supervised Model for Computational Pathology | Code | 1
UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory | Code | 1
Exploring the Transfer Learning Capabilities of CLIP in Domain Generalization for Diabetic Retinopathy | Code | 1
Transfer Learning for Microstructure Segmentation with CS-UNet: A Hybrid Algorithm with Transformer and CNN Encoders | Code | 1
RestNet: Boosting Cross-Domain Few-Shot Segmentation with Residual Transformation Network | Code | 1
Parameter-Efficient Transfer Learning for Remote Sensing Image-Text Retrieval | Code | 1
Page 18 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | — | Unverified
2 | DFA-ENT | Accuracy | 69.2 | — | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | — | Unverified
4 | EasyTL | Accuracy | 63.3 | — | Unverified
5 | MEDA | Accuracy | 60.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | — | Unverified