SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The goal is to leverage the knowledge captured by a pre-trained model when solving a new, related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
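The freeze-and-fine-tune recipe described above can be sketched in a few lines. This is a minimal NumPy illustration with made-up data, not the method of any paper listed below: a frozen projection stands in for a backbone pre-trained on a large source task, and only a small task head is fit on the limited target data.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- "Pre-trained" backbone (stands in for a network trained on a large source task) ---
W_backbone = rng.normal(size=(16, 8))  # frozen feature extractor: 16-dim input -> 8-dim features

def extract_features(x):
    """Frozen pre-trained layers: forward pass only, no weight updates."""
    return np.tanh(x @ W_backbone)

# --- Small target-task dataset (too small to train all backbone weights from scratch) ---
X_target = rng.normal(size=(40, 16))
true_head = rng.normal(size=8)
y_target = extract_features(X_target) @ true_head + 0.01 * rng.normal(size=40)

# --- Fine-tune only the new task head (8 weights) by least squares ---
feats = extract_features(X_target)
head, *_ = np.linalg.lstsq(feats, y_target, rcond=None)

residual = float(np.mean((feats @ head - y_target) ** 2))
print(f"target-task MSE after head-only fine-tuning: {residual:.4f}")
```

Because only the head is trained, the number of free parameters is tiny relative to the backbone, which is what makes the approach viable on small target datasets; in practice one would often also unfreeze some backbone layers and fine-tune them with a small learning rate.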

Papers

Showing 5251-5300 of 10307 papers

Title | Hype
Representation learning from videos in-the-wild: An object-centric approach | 0
Representation Purification for End-to-End Speech Translation | 0
Representations and Strategies for Transferable Machine Learning Models in Chemical Discovery | 0
Representation Stability as a Regularizer for Improved Text Analytics Transfer Learning | 0
Representation Topology Divergence: A Method for Comparing Neural Network Representations | 0
Representation Transfer by Optimal Transport | 0
Representation Transfer Learning via Multiple Pre-trained models for Linear Regression | 0
Re-presenting a Story by Emotional Factors using Sentimental Analysis Method | 0
Reprogramming FairGANs with Variational Auto-Encoders: A New Transfer Learning Model | 0
Reprogramming Language Models for Molecular Representation Learning | 0
Repurposing 2D Diffusion Models with Gaussian Atlas for 3D Generation | 0
Repurposing Decoder-Transformer Language Models for Abstractive Summarization | 0
Research Frontiers in Transfer Learning -- a systematic and bibliometric review | 0
Research on Cloud Platform Network Traffic Monitoring and Anomaly Detection System based on Large Language Models | 0
Research on Task Discovery for Transfer Learning in Deep Neural Networks | 0
Reset It and Forget It: Relearning Last-Layer Weights Improves Continual and Transfer Learning | 0
Resetting the baseline: CT-based COVID-19 diagnosis with Deep Transfer Learning is not as accurate as widely thought | 0
Residual Learning Inspired Crossover Operator and Strategy Enhancements for Evolutionary Multitasking | 0
Enhancing Ship Classification in Optical Satellite Imagery: Integrating Convolutional Block Attention Module with ResNet for Improved Performance | 0
Resource-efficient domain adaptive pre-training for medical images | 0
Resource-Efficient Transfer Learning From Speech Foundation Model Using Hierarchical Feature Fusion | 0
Resources and Experiments on Sentiment Classification for Georgian | 0
Response by the Montreal AI Ethics Institute to the European Commission's Whitepaper on AI | 0
Restricted Orthogonal Gradient Projection for Continual Learning | 0
RE-Tagger: A light-weight Real-Estate Image Classifier | 0
Rethinking Continual Learning for Autonomous Agents and Robots | 0
Rethinking Efficient Tuning Methods from a Unified Perspective | 0
Rethinking Evaluation Protocols of Visual Representations Learned via Self-supervised Learning | 0
Rethinking Image-to-Video Adaptation: An Object-centric Perspective | 0
Rethinking Importance Weighting for Transfer Learning | 0
Rethinking Membership Inference Attacks Against Transfer Learning | 0
Rethinking Query, Key, and Value Embedding in Vision Transformer under Tiny Model Constraints | 0
Rethinking the Role of Operating Conditions for Learning-based Multi-condition Fault Diagnosis | 0
Rethinking Transfer and Auxiliary Learning for Improving Audio Captioning Transformer | 0
Rethinking Two Consensuses of the Transferability in Deep Learning | 0
Retrieval-based Knowledge Transfer: An Effective Approach for Extreme Large Language Model Compression | 0
Reuse and Adaptation for Entity Resolution through Transfer Learning | 0
Reuse of Neural Modules for General Video Game Playing | 0
Reusing Neural Speech Representations for Auditory Emotion Recognition | 0
RevCD -- Reversed Conditional Diffusion for Generalized Zero-Shot Learning | 0
Revealing economic facts: LLMs know more than they say | 0
Revealing Fine Structures of the Retinal Receptive Field by Deep Learning Networks | 0
Revealing Secrets From Pre-trained Models | 0
Reverse Probing: Evaluating Knowledge Transfer via Finetuned Task Embeddings for Coreference Resolution | 0
Reverse Transfer Learning: Can Word Embeddings Trained for Different NLP Tasks Improve Neural Language Models? | 0
Review Learning: Alleviating Catastrophic Forgetting with Generative Replay without Generator | 0
Review of Deep Representation Learning Techniques for Brain-Computer Interfaces and Recommendations | 0
Revised Regularization for Efficient Continual Learning through Correlation-Based Parameter Update in Bayesian Neural Networks | 0
Revisiting Classical Bagging with Modern Transfer Learning for On-the-fly Disaster Damage Detector | 0
Revisiting Euclidean Alignment for Transfer Learning in EEG-Based Brain-Computer Interfaces | 0
Page 106 of 207

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | | Unverified
2 | DFA-ENT | Accuracy | 69.2 | | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | | Unverified
4 | EasyTL | Accuracy | 63.3 | | Unverified
5 | MEDA | Accuracy | 60.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | | Unverified