SOTAVerified

Transfer Learning

Transfer Learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge encoded in a pre-trained model to solve a new, related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.
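The recipe described above — freeze the pre-trained backbone, train only a small task-specific head on the limited new data — can be sketched in a few lines. This is a minimal, illustrative example, not code from any paper listed on this page: the "pre-trained" extractor is a frozen random projection standing in for a real backbone (e.g. a CNN's penultimate layer), and names like `extract_features` are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pre-trained" feature extractor: a frozen random projection
# standing in for a real backbone's penultimate layer. In practice this
# would come from a model trained on a large source task.
W_pretrained = rng.normal(size=(4, 16))

def extract_features(x):
    """Frozen backbone: these weights are NOT updated during fine-tuning."""
    return np.tanh(x @ W_pretrained)

# Small labeled dataset for the new target task
# (too small to train a full model from scratch).
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# New task-specific head: a logistic-regression layer trained from scratch
# on top of the frozen features. Only w and b are updated.
feats = extract_features(X)
w = np.zeros(16)
b = 0.0
lr = 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid of head logits
    grad = p - y                                # dLoss/dlogits for log-loss
    w -= lr * feats.T @ grad / len(y)           # update head weights only
    b -= lr * grad.mean()                       # update head bias only

preds = (1.0 / (1.0 + np.exp(-(feats @ w + b))) > 0.5)
accuracy = (preds == y.astype(bool)).mean()
print(f"train accuracy of transferred head: {accuracy:.2f}")
```

The key design point is that the backbone's parameters stay fixed, so only the small head must be fit from the limited target data; full fine-tuning would additionally update (rather than freeze) the backbone weights, usually with a smaller learning rate.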

(Image credit: Subodh Malgonde)

Papers

Showing 1026–1050 of 10307 papers

Title | Status | Hype
A Unified Framework for Domain Adaptive Pose Estimation | Code | 1
Enabling Country-Scale Land Cover Mapping with Meter-Resolution Satellite Imagery | Code | 1
A unified framework for dataset shift diagnostics | Code | 1
Adapting BERT for Word Sense Disambiguation with Gloss Selection Objective and Example Sentences | Code | 1
AutoInit: Analytic Signal-Preserving Weight Initialization for Neural Networks | Code | 1
Encapsulating Knowledge in One Prompt | Code | 1
Enhancement of price trend trading strategies via image-induced importance weights | Code | 1
Masking meets Supervision: A Strong Learning Alliance | Code | 1
A Closer Look at Few-shot Classification Again | Code | 1
Emotion Recognition from Speech Using Wav2vec 2.0 Embeddings | Code | 1
AUGNLG: Few-shot Natural Language Generation using Self-trained Data Augmentation | Code | 1
AdaptGuard: Defending Against Universal Attacks for Model Adaptation | Code | 1
EmoNet: A Transfer Learning Framework for Multi-Corpus Speech Emotion Recognition | Code | 1
Empathetic BERT2BERT Conversational Model: Learning Arabic Language Generation with Little Data | Code | 1
Audio Spoofing Verification using Deep Convolutional Neural Networks by Transfer Learning | Code | 1
Audio-based Near-Duplicate Video Retrieval with Audio Similarity Learning | Code | 1
Audio Embeddings as Teachers for Music Classification | Code | 1
AutoKE: An automatic knowledge embedding framework for scientific machine learning | Code | 1
Unlocking Emergent Modularity in Large Language Models | Code | 1
Empowering parameter-efficient transfer learning by recognizing the kernel structure in self-attention | Code | 1
Enhancing High-Resolution 3D Generation through Pixel-wise Gradient Clipping | Code | 1
AttentionHTR: Handwritten Text Recognition Based on Attention Encoder-Decoder Networks | Code | 1
AdapterHub Playground: Simple and Flexible Few-Shot Learning with Adapters | Code | 1
Efficient Visual Pretraining with Contrastive Detection | Code | 1
Attention-Based Deep Learning Framework for Human Activity Recognition with User Adaptation | Code | 1
Page 42 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | — | Unverified
2 | DFA-ENT | Accuracy | 69.2 | — | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | — | Unverified
4 | EasyTL | Accuracy | 63.3 | — | Unverified
5 | MEDA | Accuracy | 60.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | — | Unverified