SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge encoded in a pre-trained model to solve a new, related problem. This is useful when there is limited data available to train a new model from scratch, or when the new task is similar enough to the original task that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
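The idea above can be sketched as "frozen features + new head": keep a pre-trained feature extractor fixed and train only a small task-specific layer on the new data. This is a minimal illustrative sketch, not any specific library's API; the random-projection "backbone" stands in for a real pre-trained network, and the toy task and all names are assumptions made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen "pretrained" backbone: a fixed random projection
# mapping 4-dim inputs to 8-dim features. In real transfer learning this
# would be a network trained on a large source task, with its weights frozen.
W_frozen = rng.normal(size=(4, 8))

def features(x):
    # The backbone is never updated; only the new head below is trained.
    return np.tanh(x @ W_frozen)

# Tiny synthetic "new task": binary labels from a linear rule on the inputs.
X = rng.normal(size=(64, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# New task-specific head (logistic regression on the frozen features),
# trained with plain gradient descent on the log loss.
F = features(X)
w_head = np.zeros(8)
b_head = 0.0
for _ in range(500):
    z = F @ w_head + b_head
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid
    grad = p - y                   # dLoss/dz for the log loss
    w_head -= 0.1 * F.T @ grad / len(X)
    b_head -= 0.1 * grad.mean()

acc = ((F @ w_head + b_head > 0) == (y > 0.5)).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

Because only the head's few parameters are trained, the new task needs far less data than training the whole model from scratch; "fine-tuning" in the papers below usually goes one step further and also updates some or all backbone weights at a small learning rate.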

Papers

Showing 51–100 of 10,307 papers

| Title | Status | Hype |
| --- | --- | --- |
| Lightweight, Pre-trained Transformers for Remote Sensing Timeseries | Code | 2 |
| Leveraging medical Twitter to build a visual–language foundation model for pathology AI | Code | 2 |
| Large Scale Transfer Learning for Tabular Data via Language Modeling | Code | 2 |
| Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks | Code | 2 |
| Learning Dense Representations of Phrases at Scale | Code | 2 |
| LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS | Code | 2 |
| An end-to-end attention-based approach for learning on graphs | Code | 2 |
| AdapterFusion: Non-Destructive Task Composition for Transfer Learning | Code | 2 |
| InPars: Data Augmentation for Information Retrieval using Large Language Models | Code | 2 |
| How Well Do Sparse Imagenet Models Transfer? | Code | 2 |
| HistGen: Histopathology Report Generation via Local-Global Feature Encoding and Cross-modal Context Interaction | Code | 2 |
| LP-MusicCaps: LLM-Based Pseudo Music Captioning | Code | 2 |
| jiant: A Software Toolkit for Research on General-Purpose Text Understanding Models | Code | 2 |
| Graph Domain Adaptation: Challenges, Progress and Prospects | Code | 2 |
| Quantformer: from attention to profit with a quantitative transformer trading strategy | Code | 2 |
| GroupViT: Semantic Segmentation Emerges from Text Supervision | Code | 2 |
| Finetuning Large Language Models for Vulnerability Detection | Code | 2 |
| An Upload-Efficient Scheme for Transferring Knowledge From a Server-Side Pre-trained Generator to Clients in Heterogeneous Federated Learning | Code | 2 |
| FlashST: A Simple and Universal Prompt-Tuning Framework for Traffic Prediction | Code | 2 |
| HiDe-PET: Continual Learning via Hierarchical Decomposition of Parameter-Efficient Tuning | Code | 2 |
| K-LITE: Learning Transferable Visual Models with External Knowledge | Code | 2 |
| MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models | Code | 2 |
| External Knowledge Injection for CLIP-Based Class-Incremental Learning | Code | 2 |
| Exploring the Effect of Dataset Diversity in Self-Supervised Learning for Surgical Computer Vision | Code | 2 |
| Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | Code | 2 |
| Event Stream-based Visual Object Tracking: A High-Resolution Benchmark Dataset and A Novel Baseline | Code | 2 |
| Efficient Remote Sensing with Harmonized Transfer Learning and Modality Alignment | Code | 2 |
| ExpeL: LLM Agents Are Experiential Learners | Code | 2 |
| Global birdsong embeddings enable superior transfer learning for bioacoustic classification | Code | 2 |
| Do MIL Models Transfer? | Code | 2 |
| Spatio-Temporal Few-Shot Learning via Diffusive Neural Network Generation | Code | 2 |
| Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis | Code | 2 |
| Deep Model Reassembly | Code | 2 |
| Actuarial Applications of Natural Language Processing Using Transformers: Case Studies for Using Text Features in an Actuarial Context | Code | 2 |
| DinoBloom: A Foundation Model for Generalizable Cell Embeddings in Hematology | Code | 2 |
| Discovery of 2D materials using Transformer Network based Generative Design | Code | 2 |
| Deep Neural Networks to Detect Weeds from Crops in Agricultural Environments in Real-Time: A Review | Code | 2 |
| Deep Learning-Enabled Semantic Communication Systems with Task-Unaware Transmitter and Dynamic Data | Code | 2 |
| Deep learning for time series classification | Code | 2 |
| Enhancing Zero-Shot Facial Expression Recognition by LLM Knowledge Transfer | Code | 2 |
| Densely Connected Parameter-Efficient Tuning for Referring Image Segmentation | Code | 2 |
| Feature Learning in Infinite-Width Neural Networks | Code | 2 |
| All-in-one foundational models learning across quantum chemical levels | Code | 2 |
| ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning | Code | 2 |
| 3D UX-Net: A Large Kernel Volumetric ConvNet Modernizing Hierarchical Transformer for Medical Image Segmentation | Code | 2 |
| Content-Based Search for Deep Generative Models | Code | 2 |
| Continual Pre-training of Language Models | Code | 2 |
| Few-shot Knowledge Transfer for Fine-grained Cartoon Face Generation | Code | 2 |
| Foundation Model for Endoscopy Video Analysis via Large-scale Self-supervised Pre-train | Code | 2 |
| CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images | Code | 2 |
Page 2 of 207

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | APCLIP | Accuracy | 84.2 | | Unverified |
| 2 | DFA-ENT | Accuracy | 69.2 | | Unverified |
| 3 | DFA-SAFN | Accuracy | 69.1 | | Unverified |
| 4 | EasyTL | Accuracy | 63.3 | | Unverified |
| 5 | MEDA | Accuracy | 60.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CNN | 10-20% Mask PSNR | 3.23 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Co-Tuning | Accuracy | 85.65 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Physical Access | EER | 5.74 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | riadd.aucmedi | AUROC | 0.95 | | Unverified |