SOTAVerified

Transfer Learning

Transfer learning is a machine-learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new but related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
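The "adapt with only minor modifications" step above is typically implemented by freezing the pre-trained feature extractor and training only a new task-specific head. Here is a minimal NumPy sketch of that idea; the data is synthetic, the frozen backbone weights are random stand-ins for real pre-trained weights, and names like `extract_features` and `w_head` are illustrative, not from any particular library.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained backbone: in practice these weights come
# from a model trained on a large source task; here they are random.
W_frozen = rng.normal(size=(16, 8))

def extract_features(x):
    # Frozen feature extractor: its weights are NOT updated below.
    return np.tanh(x @ W_frozen)

# Small synthetic target-task dataset: 40 samples, binary labels.
X = rng.normal(size=(40, 16))
y = (X[:, 0] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# New task-specific head, trained from scratch on the target task.
w_head = np.zeros(8)
b_head = 0.0

# Fine-tune ONLY the head with plain gradient descent on the
# logistic loss; the backbone stays fixed throughout.
feats = extract_features(X)
for _ in range(500):
    p = sigmoid(feats @ w_head + b_head)
    grad_w = feats.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w_head -= 0.5 * grad_w
    b_head -= 0.5 * grad_b

acc = np.mean((sigmoid(feats @ w_head + b_head) > 0.5) == (y == 1))
print(f"target-task training accuracy after head-only fine-tuning: {acc:.2f}")
```

When more target data is available, a common variant unfreezes some or all backbone layers and continues training them at a lower learning rate.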

Papers

Showing 5276-5300 of 10307 papers

Title | Status | Hype
Rethinking Continual Learning for Autonomous Agents and Robots | - | 0
Rethinking Efficient Tuning Methods from a Unified Perspective | - | 0
Rethinking Evaluation Protocols of Visual Representations Learned via Self-supervised Learning | - | 0
Rethinking Image-to-Video Adaptation: An Object-centric Perspective | - | 0
Rethinking Importance Weighting for Transfer Learning | - | 0
Rethinking Membership Inference Attacks Against Transfer Learning | - | 0
Rethinking Query, Key, and Value Embedding in Vision Transformer under Tiny Model Constraints | - | 0
Rethinking the Role of Operating Conditions for Learning-based Multi-condition Fault Diagnosis | - | 0
Rethinking Transfer and Auxiliary Learning for Improving Audio Captioning Transformer | - | 0
Rethinking Two Consensuses of the Transferability in Deep Learning | - | 0
Retrieval-based Knowledge Transfer: An Effective Approach for Extreme Large Language Model Compression | - | 0
Reuse and Adaptation for Entity Resolution through Transfer Learning | - | 0
Reuse of Neural Modules for General Video Game Playing | - | 0
Reusing Neural Speech Representations for Auditory Emotion Recognition | - | 0
RevCD -- Reversed Conditional Diffusion for Generalized Zero-Shot Learning | - | 0
Revealing economic facts: LLMs know more than they say | - | 0
Revealing Fine Structures of the Retinal Receptive Field by Deep Learning Networks | - | 0
Revealing Secrets From Pre-trained Models | - | 0
Reverse Probing: Evaluating Knowledge Transfer via Finetuned Task Embeddings for Coreference Resolution | - | 0
Reverse Transfer Learning: Can Word Embeddings Trained for Different NLP Tasks Improve Neural Language Models? | - | 0
Review Learning: Alleviating Catastrophic Forgetting with Generative Replay without Generator | - | 0
Review of Deep Representation Learning Techniques for Brain-Computer Interfaces and Recommendations | - | 0
Revised Regularization for Efficient Continual Learning through Correlation-Based Parameter Update in Bayesian Neural Networks | - | 0
Revisiting Classical Bagging with Modern Transfer Learning for On-the-fly Disaster Damage Detector | - | 0
Revisiting Euclidean Alignment for Transfer Learning in EEG-Based Brain-Computer Interfaces | - | 0
Page 212 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | - | Unverified
2 | DFA-ENT | Accuracy | 69.2 | - | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | - | Unverified
4 | EasyTL | Accuracy | 63.3 | - | Unverified
5 | MEDA | Accuracy | 60.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | - | Unverified