SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new, related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
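As a concrete illustration (not taken from this page), below is a minimal sketch of the typical workflow in PyTorch: a ResNet-18 pre-trained on ImageNet is reused as a frozen feature extractor, and only a newly attached classification head is fine-tuned for a hypothetical 10-class target task. The class count, learning rate, and data loading are assumptions for the example.

import torch
import torch.nn as nn
from torchvision import models

# Load a model pre-trained on ImageNet (the source task).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the backbone so its learned features are reused, not retrained.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the new (target) task,
# assumed here to have 10 classes.
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    # One fine-tuning step on a batch from the target-task dataset
    # (images, labels are assumed to come from a DataLoader).
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

With more target data, the backbone can also be unfrozen and trained at a lower learning rate instead of being kept fixed; which variant works better depends on how similar the two tasks are.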

Papers

Showing 276–300 of 10,307 papers

Title | Status | Hype
Knowledge Transfer with Simulated Inter-Image Erasing for Weakly Supervised Semantic Segmentation | Code | 1
Towards Learning Abductive Reasoning using VSA Distributed Representations | Code | 1
Crosslingual Capabilities and Knowledge Barriers in Multilingual Large Language Models | Code | 1
GOAL: A Generalist Combinatorial Optimization Agent Learning | Code | 1
WONDERBREAD: A Benchmark for Evaluating Multimodal Foundation Models on Business Process Management Tasks | Code | 1
BIOSCAN-5M: A Multimodal Dataset for Insect Biodiversity | Code | 1
UniGLM: Training One Unified Language Model for Text-Attributed Graph Embedding | Code | 1
Self-Supervised Representation Learning with Spatial-Temporal Consistency for Sign Language Recognition | Code | 1
Industrial Language-Image Dataset (ILID): Adapting Vision Foundation Models for Industrial Settings | Code | 1
MSAGPT: Neural Prompting Protein Structure Prediction via MSA Generative Pre-Training | Code | 1
InaGVAD: a Challenging French TV and Radio Corpus Annotated for Speech Activity Detection and Speaker Gender Segmentation | Code | 1
LLMEmbed: Rethinking Lightweight LLM's Genuine Function in Text Classification | Code | 1
Multi-Task Multi-Scale Contrastive Knowledge Distillation for Efficient Medical Image Segmentation | Code | 1
Leveraging Predicate and Triplet Learning for Scene Graph Generation | Code | 1
Source Code Foundation Models are Transferable Binary Analysis Knowledge Bases | Code | 1
MDS-ViTNet: Improving Saliency Prediction for Eye-Tracking with Vision Transformer | Code | 1
LoGAH: Predicting 774-Million-Parameter Transformers using Graph HyperNetworks with 1/100 Parameters | Code | 1
Implicit In-context Learning | Code | 1
Fine-grained Image-to-LiDAR Contrastive Distillation with Visual Foundation Models | Code | 1
Boosted Neural Decoders: Achieving Extreme Reliability of LDPC Codes for 6G Networks | Code | 1
Towards Foundation Model for Chemical Reactor Modeling: Meta-Learning with Physics-Informed Adaptation | Code | 1
Overcoming Data and Model Heterogeneities in Decentralized Federated Learning via Synthetic Anchors | Code | 1
Subject-Adaptive Transfer Learning Using Resting State EEG Signals for Cross-Subject EEG Motor Imagery Classification | Code | 1
CloudS2Mask: A Novel Deep Learning Approach for Improved Cloud and Cloud Shadow Masking in Sentinel-2 Imagery | Code | 1
Feature-based Federated Transfer Learning: Communication Efficiency, Robustness and Privacy | Code | 1
Page 12 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | - | Unverified
2 | DFA-ENT | Accuracy | 69.2 | - | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | - | Unverified
4 | EasyTL | Accuracy | 63.3 | - | Unverified
5 | MEDA | Accuracy | 60.3 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | - | Unverified