SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a different but related task. The idea is to leverage the knowledge captured by a pre-trained model when solving a new problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

( Image credit: Subodh Malgonde )
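The standard recipe described above (freeze the pre-trained layers, train only a new task-specific head) can be sketched in a few lines. The code below is a minimal illustration, not any specific paper's method: a fixed random projection stands in for a frozen pre-trained feature extractor, and only a logistic-regression head is trained on a small toy dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor: a fixed random projection standing in
# for weights learned on a source task. In transfer learning these layers
# are frozen and reused; only the task-specific head is trained.
W_frozen = rng.normal(size=(2, 8))

def features(X):
    return np.tanh(X @ W_frozen)  # frozen backbone: never updated below

# Small target-task dataset: classify points by the sign of x0 + x1.
X = rng.normal(size=(40, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Fine-tune only the new head (logistic regression on frozen features).
w, b = np.zeros(8), 0.0
feats = features(X)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid predictions
    grad = p - y                                # dLoss/dlogits
    w -= 0.5 * feats.T @ grad / len(y)          # update head weights only
    b -= 0.5 * grad.mean()                      # update head bias only

acc = (((feats @ w + b) > 0) == (y > 0.5)).mean()
print(f"head-only fine-tuning accuracy: {acc:.2f}")
```

In a real setting the frozen weights would come from a model trained on a large source dataset (e.g. ImageNet), and "minor modifications" often means additionally unfreezing the last few backbone layers at a small learning rate.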

Papers

Showing 76-100 of 10,307 papers

Title | Status | Hype
Enhancing Zero-Shot Facial Expression Recognition by LLM Knowledge Transfer | Code | 2
FlashST: A Simple and Universal Prompt-Tuning Framework for Traffic Prediction | Code | 2
XTrack: Multimodal Training Boosts RGB-X Video Object Trackers | Code | 2
MRSegmentator: Multi-Modality Segmentation of 40 Classes in MRI and CT | Code | 2
Efficient Remote Sensing with Harmonized Transfer Learning and Modality Alignment | Code | 2
Constructing and Exploring Intermediate Domains in Mixed Domain Semi-supervised Medical Image Segmentation | Code | 2
DinoBloom: A Foundation Model for Generalizable Cell Embeddings in Hematology | Code | 2
Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners | Code | 2
NeRF-MAE: Masked AutoEncoders for Self-Supervised 3D Representation Learning for Neural Radiance Fields | Code | 2
Quantformer: from attention to profit with a quantitative transformer trading strategy | Code | 2
An Upload-Efficient Scheme for Transferring Knowledge From a Server-Side Pre-trained Generator to Clients in Heterogeneous Federated Learning | Code | 2
HistGen: Histopathology Report Generation via Local-Global Feature Encoding and Cross-modal Context Interaction | Code | 2
AUFormer: Vision Transformers are Parameter-Efficient Facial Action Unit Detectors | Code | 2
Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis | Code | 2
CARTE: Pretraining and Transfer for Tabular Learning | Code | 2
CLAP: Learning Transferable Binary Code Representations with Natural Language Supervision | Code | 2
VOLoc: Visual Place Recognition by Querying Compressed Lidar Map | Code | 2
Spatio-Temporal Few-Shot Learning via Diffusive Neural Network Generation | Code | 2
An end-to-end attention-based approach for learning on graphs | Code | 2
PLAPT: Protein-Ligand Binding Affinity Prediction Using Pretrained Transformers | Code | 2
Triplet Interaction Improves Graph Transformers: Accurate Molecular Graph Learning with Triplet Graph Transformers | Code | 2
Graph Domain Adaptation: Challenges, Progress and Prospects | Code | 2
Finetuning Large Language Models for Vulnerability Detection | Code | 2
MMA: Multi-Modal Adapter for Vision-Language Models | Code | 2
Any-point Trajectory Modeling for Policy Learning | Code | 2
Page 4 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | | Unverified
2 | DFA-ENT | Accuracy | 69.2 | | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | | Unverified
4 | EasyTL | Accuracy | 63.3 | | Unverified
5 | MEDA | Accuracy | 60.3 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | | Unverified