SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new but related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
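The idea above can be sketched numerically. The following is a minimal toy example, not any model from the list below: a linear "feature extractor" is pre-trained on a data-rich source task, then frozen, and only a small head is fit on a data-poor target task that shares the same latent structure. All names and sizes (`W_pre`, `d_feat`, the sample counts) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: source and target tasks share a low-dimensional latent structure.
d_in, d_feat = 20, 5
W_true = rng.normal(size=(d_in, d_feat))

# Source task: plenty of data, used to "pre-train" a feature extractor.
X_src = rng.normal(size=(1000, d_in))
Y_src = X_src @ W_true + 0.1 * rng.normal(size=(1000, d_feat))
W_pre, *_ = np.linalg.lstsq(X_src, Y_src, rcond=None)  # pre-trained extractor

# Target task: only 25 labelled points, labels depend on the same features.
head_true = rng.normal(size=d_feat)
X_tgt = rng.normal(size=(25, d_in))
y_tgt = (X_tgt @ W_true) @ head_true + 0.5 * rng.normal(size=25)

# Transfer: freeze W_pre and fit only a small 5-parameter head on top of it.
feats = X_tgt @ W_pre
head, *_ = np.linalg.lstsq(feats, y_tgt, rcond=None)

# Baseline: fit all 20 input weights from scratch on the same 25 points.
w_scratch, *_ = np.linalg.lstsq(X_tgt, y_tgt, rcond=None)

# Evaluate both models on fresh, noise-free target data.
X_test = rng.normal(size=(500, d_in))
y_test = (X_test @ W_true) @ head_true
err_transfer = float(np.mean((X_test @ W_pre @ head - y_test) ** 2))
err_scratch = float(np.mean((X_test @ w_scratch - y_test) ** 2))
print(f"transfer MSE: {err_transfer:.3f}, from-scratch MSE: {err_scratch:.3f}")
```

Because the transferred model estimates only 5 parameters from the 25 target points (the extractor having been learned from the 1000 source points), it generalizes better than the 20-parameter model fit from scratch on the small target set.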

Papers

Showing 5251–5275 of 10307 papers

Title | Status | Hype
Representation learning from videos in-the-wild: An object-centric approach | | 0
Representation Purification for End-to-End Speech Translation | | 0
Representations and Strategies for Transferable Machine Learning Models in Chemical Discovery | | 0
Representation Stability as a Regularizer for Improved Text Analytics Transfer Learning | | 0
Representation Topology Divergence: A Method for Comparing Neural Network Representations | | 0
Representation Transfer by Optimal Transport | | 0
Representation Transfer Learning via Multiple Pre-trained Models for Linear Regression | | 0
Re-presenting a Story by Emotional Factors using Sentimental Analysis Method | | 0
Reprogramming FairGANs with Variational Auto-Encoders: A New Transfer Learning Model | | 0
Reprogramming Language Models for Molecular Representation Learning | | 0
Repurposing 2D Diffusion Models with Gaussian Atlas for 3D Generation | | 0
Repurposing Decoder-Transformer Language Models for Abstractive Summarization | | 0
Research Frontiers in Transfer Learning -- a systematic and bibliometric review | | 0
Research on Cloud Platform Network Traffic Monitoring and Anomaly Detection System based on Large Language Models | | 0
Research on Task Discovery for Transfer Learning in Deep Neural Networks | | 0
Reset It and Forget It: Relearning Last-Layer Weights Improves Continual and Transfer Learning | | 0
Resetting the baseline: CT-based COVID-19 diagnosis with Deep Transfer Learning is not as accurate as widely thought | | 0
Residual Learning Inspired Crossover Operator and Strategy Enhancements for Evolutionary Multitasking | | 0
Enhancing Ship Classification in Optical Satellite Imagery: Integrating Convolutional Block Attention Module with ResNet for Improved Performance | | 0
Resource-efficient domain adaptive pre-training for medical images | | 0
Resource-Efficient Transfer Learning From Speech Foundation Model Using Hierarchical Feature Fusion | | 0
Resources and Experiments on Sentiment Classification for Georgian | | 0
Response by the Montreal AI Ethics Institute to the European Commission's Whitepaper on AI | | 0
Restricted Orthogonal Gradient Projection for Continual Learning | | 0
RE-Tagger: A light-weight Real-Estate Image Classifier | | 0
Page 211 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | | Unverified
2 | DFA-ENT | Accuracy | 69.2 | | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | | Unverified
4 | EasyTL | Accuracy | 63.3 | | Unverified
5 | MEDA | Accuracy | 60.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | | Unverified