SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model when solving a new problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
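The idea above can be sketched in a few lines. This is a minimal illustration, not a real training pipeline: a fixed weight matrix stands in for a backbone learned on the original task, it stays frozen, and only a new task-specific head is trained on the extracted features. All names and the toy task are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor: a fixed weight matrix standing in for
# layers learned on the original task (random here, purely for illustration).
W_pretrained = rng.normal(size=(4, 8))

def extract_features(x):
    # Frozen backbone: these weights are never updated during fine-tuning.
    return np.tanh(x @ W_pretrained)

# Toy data for the *new* task (hypothetical labels, linear in two inputs).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# New task-specific head, trained from scratch on the frozen features.
w_head = np.zeros(8)
b_head = 0.0
lr = 0.5
for _ in range(300):
    feats = extract_features(X)
    p = 1.0 / (1.0 + np.exp(-(feats @ w_head + b_head)))  # sigmoid
    grad = p - y                                          # logistic-loss gradient
    w_head -= lr * feats.T @ grad / len(y)                # update head only
    b_head -= lr * grad.mean()

acc = ((p > 0.5) == y).mean()
print(f"head-only accuracy on new task: {acc:.2f}")
```

In practice the frozen backbone would be a real pre-trained network (e.g. an ImageNet model with its classification layer replaced), and one would optionally unfreeze some backbone layers for full fine-tuning once the new head has converged.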

Papers

Showing 8676–8700 of 10307 papers

| Title | Status | Hype |
| --- | --- | --- |
| Accelerating Malware Classification: A Vision Transformer Solution | Code | 0 |
| Accelerating Transfer Learning with Near-Data Computation on Cloud Object Stores | Code | 0 |
| Accounts of using the Tustin-Net architecture on a rotary inverted pendulum | Code | 0 |
| ACE: Zero-Shot Image to Image Translation via Pretrained Auto-Contrastive-Encoder | Code | 0 |
| A Combinatorial Perspective on Transfer Learning | Code | 0 |
| A Common Semantic Space for Monolingual and Cross-Lingual Meta-Embeddings | Code | 0 |
| A Comparative Analysis of Machine Learning Approaches for Automated Face Mask Detection During COVID-19 | Code | 0 |
| Advance Warning Methodologies for COVID-19 using Chest X-Ray Images | Code | 0 |
| A Comparison between Named Entity Recognition Models in the Biomedical Domain | Code | 0 |
| A comparison of small sample methods for Handshape Recognition | Code | 0 |
| A Comprehensive Understanding of Code-mixed Language Semantics using Hierarchical Transformer | Code | 0 |
| A Contrastive Knowledge Transfer Framework for Model Compression and Transfer Learning | Code | 0 |
| Action Priors for Large Action Spaces in Robotics | Code | 0 |
| Action Quality Assessment Across Multiple Actions | Code | 0 |
| Action Recognition Using Temporal Shift Module and Ensemble Learning | Code | 0 |
| Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning | Code | 0 |
| ADA-Net: Attention-Guided Domain Adaptation Network with Contrastive Learning for Standing Dead Tree Segmentation Using Aerial Imagery | Code | 0 |
| Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language | Code | 0 |
| Adaptation of Tacotron2-based Text-To-Speech for Articulatory-to-Acoustic Mapping using Ultrasound Tongue Imaging | Code | 0 |
| Adapted Deep Embeddings: A Synthesis of Methods for k-Shot Inductive Transfer Learning | Code | 0 |
| AdapterEM: Pre-trained Language Model Adaptation for Generalized Entity Matching using Adapter-tuning | Code | 0 |
| Adapting Monolingual Models: Data can be Scarce when Language Similarity is High | Code | 0 |
| Adapting Multilingual LLMs to Low-Resource Languages with Knowledge Graphs via Adapters | Code | 0 |
| Adapting Pre-trained Language Models to Vision-Language Tasks via Dynamic Visual Prompting | Code | 0 |
Page 348 of 413

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | APCLIP | Accuracy | 84.2 | | Unverified |
| 2 | DFA-ENT | Accuracy | 69.2 | | Unverified |
| 3 | DFA-SAFN | Accuracy | 69.1 | | Unverified |
| 4 | EasyTL | Accuracy | 63.3 | | Unverified |
| 5 | MEDA | Accuracy | 60.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CNN | 10-20% Mask PSNR | 3.23 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Co-Tuning | Accuracy | 85.65 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Physical Access | EER | 5.74 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | riadd.aucmedi | AUROC | 0.95 | | Unverified |