
Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new but related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
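
To make the workflow described above concrete, here is a minimal sketch of fine-tuning a pre-trained model on a new task. It uses PyTorch and torchvision purely for illustration (this page does not prescribe a framework), and the backbone choice, class count, and training batch are hypothetical placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pre-trained on ImageNet (the source task).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the backbone so its learned features are kept as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the target task.
num_classes = 10  # hypothetical number of classes in the new task
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Train only the new head.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch from the target task.
inputs = torch.randn(8, 3, 224, 224)            # placeholder images
labels = torch.randint(0, num_classes, (8,))    # placeholder labels
optimizer.zero_grad()
loss = criterion(model(inputs), labels)
loss.backward()
optimizer.step()
```

Freezing everything except the head is the cheapest variant and suits target tasks close to the source task; with more target data, one might instead unfreeze some or all backbone layers and fine-tune them with a smaller learning rate.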

Papers

Showing 4601–4625 of 10307 papers

Title | Status | Hype
Histology Virtual Staining with Mask-Guided Adversarial Transfer Learning for Tertiary Lymphoid Structure Detection | — | 0
HistoTransfer: Understanding Transfer Learning for Histopathology | — | 0
HMAE: Self-Supervised Few-Shot Learning for Quantum Spin Systems | — | 0
Holistic Multi-Slice Framework for Dynamic Simultaneous Multi-Slice MRI Reconstruction | — | 0
HoloFed: Environment-Adaptive Positioning via Multi-band Reconfigurable Holographic Surfaces and Federated Learning | — | 0
HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers | — | 0
Homomorphisms Between Transfer, Multi-Task, and Meta-Learning Systems | — | 0
Homophily and missing links in citation networks | — | 0
Hot PATE: Private Aggregation of Distributions for Diverse Task | — | 0
On Transfer of Adversarial Robustness from Pretraining to Downstream Tasks | — | 0
How Can Cross-lingual Knowledge Contribute Better to Fine-Grained Entity Typing? | — | 0
How Can We Accelerate Progress Towards Human-like Linguistic Generalization? | — | 0
How Different Text-preprocessing Techniques Using The BERT Model Affect The Gender Profiling of Authors | — | 0
How Does Adversarial Fine-Tuning Benefit BERT? | — | 0
How does a Multilingual LM Handle Multiple Languages? | — | 0
How does Architecture Influence the Base Capabilities of Pre-trained Language Models? A Case Study Based on FFN-Wider and MoE Transformers | — | 0
How Does Data Diversity Shape the Weight Landscape of Neural Networks? | — | 0
Model-guided Multi-path Knowledge Aggregation for Aerial Saliency Prediction | — | 0
How Effective is Pre-training of Large Masked Autoencoders for Downstream Earth Observation Tasks? | — | 0
How Far Can We Go with Data Selection? A Case Study on Semantic Sequence Tagging Tasks | — | 0
How Lightweight Can A Vision Transformer Be | — | 0
How many Observations are Enough? Knowledge Distillation for Trajectory Forecasting | — | 0
How much data do I need? A case study on medical data | — | 0
How Much Off-The-Shelf Knowledge Is Transferable From Natural Images To Pathology Images? | — | 0
How Self-Supervised Learning Can be Used for Fine-Grained Head Pose Estimation? | — | 0
Page 185 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | — | Unverified
2 | DFA-ENT | Accuracy | 69.2 | — | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | — | Unverified
4 | EasyTL | Accuracy | 63.3 | — | Unverified
5 | MEDA | Accuracy | 60.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | — | Unverified