SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model when solving a new, related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.
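A minimal numpy-only sketch of this idea: random stand-in weights play the role of a pretrained backbone (in practice they would come from a model trained on a large source task), the backbone is frozen, and only a small task-specific head is trained on the limited target data. All names here are illustrative, not from any particular library.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: stand-in random weights; in real transfer
# learning these come from a model trained on a large source task.
W_backbone = rng.normal(size=(10, 4))    # frozen during fine-tuning

def features(x):
    return np.tanh(x @ W_backbone)       # fixed representation

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Small target-task dataset: the typical limited-labels transfer setting.
X = rng.normal(size=(64, 10))
y = (X[:, 0] > 0).astype(float)

# New task-specific head, trained from scratch on the target task.
w_head = np.zeros(4)
b_head = 0.0

backbone_before = W_backbone.copy()
for _ in range(200):                      # gradient descent on the head only
    h = features(X)
    p = sigmoid(h @ w_head + b_head)
    grad = p - y                          # dLoss/dlogits for log loss
    w_head -= 0.1 * h.T @ grad / len(y)
    b_head -= 0.1 * grad.mean()

# The backbone was never updated: only the head adapted to the new task.
assert np.allclose(W_backbone, backbone_before)
```

Freezing the backbone and training only the head is the cheapest form of fine-tuning; with more target data, some or all backbone layers can be unfrozen and trained at a lower learning rate.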

(Image credit: Subodh Malgonde)

Papers

Showing 3526–3550 of 10,307 papers

Title | Status | Hype
Task-Attentive Transformer Architecture for Continual Learning of Vision-and-Language Tasks Using Knowledge Distillation | - | 0
Federated Learning without Full Labels: A Survey | - | 0
AgileGAN3D: Few-Shot 3D Portrait Stylization by Augmented Transfer Learning | - | 0
SPEC: Summary Preference Decomposition for Low-Resource Abstractive Summarization | - | 0
Decoupled Multimodal Distilling for Emotion Recognition | Code | 1
Unknown Sniffer for Object Detection: Don't Turn a Blind Eye to Unknown Objects | Code | 1
Convolutional Neural Networks for the classification of glitches in gravitational-wave data streams | - | 0
MSdocTr-Lite: A Lite Transformer for Full Page Multi-script Handwriting Recognition | - | 0
An embedding for EEG signals learned using a triplet loss | - | 0
Compositional Zero-Shot Domain Transfer with Text-to-Text Models | - | 0
An Efficient Knowledge Transfer Strategy for Spiking Neural Networks from Static to Event Domain | Code | 1
DetOFA: Efficient Training of Once-for-All Networks for Object Detection Using Path Filter | - | 0
Parameter-Efficient Sparse Retrievers and Rerankers using Adapters | - | 0
A Closer Look at Model Adaptation using Feature Distortion and Simplicity Bias | - | 0
Leveraging Multi-time Hamilton-Jacobi PDEs for Certain Scientific Machine Learning Problems | Code | 0
Exploring the Benefits of Visual Prompting in Differential Privacy | Code | 0
Automatically Predict Material Properties with Microscopic Image Example Polymer Compatibility | - | 0
Generate labeled training data using Prompt Programming and GPT-3. An example of Big Five Personality Classification | - | 0
Label-Efficient Deep Learning in Medical Image Analysis: Challenges and Future Directions | - | 0
Fine-tuning ClimateBert transformer with ClimaText for the disclosure analysis of climate-related financial risks | - | 0
Fix the Noise: Disentangling Source Feature for Controllable Domain Translation | Code | 1
Full or Weak annotations? An adaptive strategy for budget-constrained annotation campaigns | - | 0
Out of Thin Air: Exploring Data-Free Adversarial Robustness Distillation | - | 0
Manipulating Transfer Learning for Property Inference | Code | 0
ViC-MAE: Self-Supervised Representation Learning from Images and Video with Contrastive Masked Autoencoders | Code | 0
Page 142 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | - | Unverified
2 | DFA-ENT | Accuracy | 69.2 | - | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | - | Unverified
4 | EasyTL | Accuracy | 63.3 | - | Unverified
5 | MEDA | Accuracy | 60.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | - | Unverified