SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a different but related task. The idea is to leverage the knowledge captured by a pre-trained model rather than training a new model from scratch. This is especially useful when the target task has limited training data, or when it is similar enough to the source task that the pre-trained model can be adapted with only minor modifications.
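A minimal sketch of the freeze-and-fine-tune pattern described above, using NumPy only. The "pretrained" feature extractor here is a fixed random projection standing in for frozen pretrained layers (a simplifying assumption for illustration); in practice it would be a network trained on a large source task. Only a new task-specific head is trained on the small target dataset; all names (`extract_features`, `W_pretrained`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen pretrained layers: a fixed projection that is
# never updated during fine-tuning.
W_pretrained = rng.normal(size=(4, 8))

def extract_features(x):
    # Frozen feature extractor: reused as-is on the new task.
    return np.tanh(x @ W_pretrained)

# Small target-task dataset (limited data is the typical transfer setting).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# New task-specific head, trained from scratch via logistic regression.
feats = extract_features(X)
w = np.zeros(8)
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid of head logits
    grad = p - y                                # gradient of log loss w.r.t. logits
    w -= lr * feats.T @ grad / len(y)
    b -= lr * grad.mean()

acc = ((feats @ w + b > 0) == (y == 1)).mean()
print(f"target-task accuracy: {acc:.2f}")
```

Only `w` and `b` receive gradient updates; `W_pretrained` stays fixed, which is exactly the "minor modifications" regime the paragraph describes. In a deep-learning framework the same effect is achieved by disabling gradients on the pretrained layers and attaching a fresh output head.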

(Image credit: Subodh Malgonde)

Papers

Showing 2801–2825 of 10307 papers

| Title | Status | Hype |
| --- | --- | --- |
| Deep Transfer Learning for Multiple Class Novelty Detection | Code | 0 |
| Deep Transfer Learning for Multi-source Entity Linkage via Domain Adaptation | Code | 0 |
| Facial Landmark Predictions with Applications to Metaverse | Code | 0 |
| Facial Emotion Recognition Under Mask Coverage Using a Data Augmentation Technique | Code | 0 |
| Facilitating the sharing of electrophysiology data analysis results through in-depth provenance capture | Code | 0 |
| fairseq S2T: Fast Speech-to-Text Modeling with fairseq | Code | 0 |
| Fleet Control using Coregionalized Gaussian Process Policy Iteration | Code | 0 |
| Manipulating Transfer Learning for Property Inference | Code | 0 |
| GLoMo: Unsupervisedly Learned Relational Graphs as Transferable Representations | Code | 0 |
| Improving Response Time of Home IoT Services in Federated Learning | Code | 0 |
| Exploring User Retrieval Integration towards Large Language Models for Cross-Domain Sequential Recommendation | Code | 0 |
| Bayesian Multi-Task Transfer Learning for Soft Prompt Tuning | Code | 0 |
| A Little Annotation does a Lot of Good: A Study in Bootstrapping Low-resource Named Entity Recognizers | Code | 0 |
| Uncovering the Handwritten Text in the Margins: End-to-end Handwritten Text Detection and Recognition | Code | 0 |
| Cross-domain Transfer Learning and State Inference for Soft Robots via a Semi-supervised Sequential Variational Bayes Framework | Code | 0 |
| Extending LLMs to New Languages: A Case Study of Llama and Persian Adaptation | Code | 0 |
| Alioth: A Machine Learning Based Interference-Aware Performance Monitor for Multi-Tenancy Applications in Public Cloud | Code | 0 |
| Transformers on Multilingual Clause-Level Morphology | Code | 0 |
| Speech foundation models in healthcare: Effect of layer selection on pathological speech feature prediction | Code | 0 |
| Cross-Domain Self-supervised Multi-task Feature Learning using Synthetic Imagery | Code | 0 |
| Exploring the Limits of Weakly Supervised Pretraining | Code | 0 |
| Exploring the potential of transfer learning for metamodels of heterogeneous material deformation | Code | 0 |
| Exploring the Benefits of Differentially Private Pre-training and Parameter-Efficient Fine-tuning for Table Transformers | Code | 0 |
| 3DLaneNAS: Neural Architecture Search for Accurate and Light-Weight 3D Lane Detection | Code | 0 |
| Exploring the Benefits of Visual Prompting in Differential Privacy | Code | 0 |
Page 113 of 413

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | APCLIP | Accuracy | 84.2 | - | Unverified |
| 2 | DFA-ENT | Accuracy | 69.2 | - | Unverified |
| 3 | DFA-SAFN | Accuracy | 69.1 | - | Unverified |
| 4 | EasyTL | Accuracy | 63.3 | - | Unverified |
| 5 | MEDA | Accuracy | 60.3 | - | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CNN | 10-20% Mask PSNR | 3.23 | - | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | - | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Co-Tuning | Accuracy | 85.65 | - | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Physical Access | EER | 5.74 | - | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | riadd.aucmedi | AUROC | 0.95 | - | Unverified |