
Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new, related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
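
As an illustration, a common transfer-learning recipe is to take a backbone pre-trained on a large dataset (e.g. ImageNet), freeze its weights, and train only a new task-specific head on the smaller target dataset. Below is a minimal sketch using PyTorch and torchvision; the `train_loader` and `num_classes` for the target task are hypothetical placeholders, not part of any specific paper listed here.

```python
# Minimal transfer-learning sketch (PyTorch / torchvision assumed installed).
import torch
import torch.nn as nn
from torchvision import models

# Start from a model pre-trained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained backbone so its weights are not updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the new task
# (num_classes is a hypothetical placeholder for the target task).
num_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Fine-tune only the new head on the (limited) target-task data.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:  # hypothetical DataLoader for the new task
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Freezing the backbone is the cheapest variant; when more target data is available, it is also common to unfreeze some or all backbone layers and fine-tune them with a smaller learning rate.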

Papers

Showing 151–175 of 10,307 papers

| Title | Status | Hype |
| --- | --- | --- |
| CLIP-Driven Universal Model for Organ Segmentation and Tumor Detection | Code | 2 |
| CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images | Code | 2 |
| Feature Learning in Infinite-Width Neural Networks | Code | 2 |
| BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks | Code | 2 |
| All-in-one foundational models learning across quantum chemical levels | Code | 2 |
| FlashST: A Simple and Universal Prompt-Tuning Framework for Traffic Prediction | Code | 2 |
| 3D UX-Net: A Large Kernel Volumetric ConvNet Modernizing Hierarchical Transformer for Medical Image Segmentation | Code | 2 |
| Quantformer: from attention to profit with a quantitative transformer trading strategy | Code | 2 |
| Actuarial Applications of Natural Language Processing Using Transformers: Case Studies for Using Text Features in an Actuarial Context | Code | 2 |
| CARTE: Pretraining and Transfer for Tabular Learning | Code | 2 |
| AXIAL: Attention-based eXplainability for Interpretable Alzheimer's Localized Diagnosis using 2D CNNs on 3D MRI brain scans | Code | 2 |
| K-LITE: Learning Transferable Visual Models with External Knowledge | Code | 2 |
| CascadeTabNet: An approach for end to end table detection and structure recognition from image-based documents | Code | 2 |
| Leveraging medical Twitter to build a visual–language foundation model for pathology AI | Code | 2 |
| Constructing and Exploring Intermediate Domains in Mixed Domain Semi-supervised Medical Image Segmentation | Code | 2 |
| LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS | Code | 2 |
| Discovery of 2D materials using Transformer Network based Generative Design | Code | 2 |
| LP-MusicCaps: LLM-Based Pseudo Music Captioning | Code | 2 |
| An Upload-Efficient Scheme for Transferring Knowledge From a Server-Side Pre-trained Generator to Clients in Heterogeneous Federated Learning | Code | 2 |
| How Well Do Sparse Imagenet Models Transfer? | Code | 2 |
| AutoKE: An automatic knowledge embedding framework for scientific machine learning | Code | 1 |
| AutoInit: Analytic Signal-Preserving Weight Initialization for Neural Networks | Code | 1 |
| Automated Cloud Provisioning on AWS using Deep Reinforcement Learning | Code | 1 |
| Authorship Style Transfer with Policy Optimization | Code | 1 |
| A unified scalable framework for causal sweeping strategies for Physics-Informed Neural Networks (PINNs) and their temporal decompositions | Code | 1 |
Page 7 of 413

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | APCLIP | Accuracy | 84.2 | | Unverified |
| 2 | DFA-ENT | Accuracy | 69.2 | | Unverified |
| 3 | DFA-SAFN | Accuracy | 69.1 | | Unverified |
| 4 | EasyTL | Accuracy | 63.3 | | Unverified |
| 5 | MEDA | Accuracy | 60.3 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | CNN | 10-20% Mask PSNR | 3.23 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Co-Tuning | Accuracy | 85.65 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Physical Access | EER | 5.74 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | riadd.aucmedi | AUROC | 0.95 | | Unverified |