SOTAVerified

Transfer Learning

Transfer learning is a machine learning technique in which a model trained on one task is re-purposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new, related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
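The freeze-and-fine-tune recipe described above can be sketched in PyTorch. This is a minimal illustration, not any specific paper's method: a tiny randomly initialised network stands in for a real pretrained backbone (in practice you would load published weights, e.g. a torchvision ResNet), its parameters are frozen, and only a new task-specific head is trained.

```python
import torch
from torch import nn

# Stand-in "backbone": in a real setting this would be a network loaded
# with pretrained weights; here it is randomly initialised purely to keep
# the sketch self-contained.
backbone = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
)

# Freeze the pretrained layers so their learned features are kept as-is.
for param in backbone.parameters():
    param.requires_grad = False

# New task-specific head (here: a 3-class classifier), trained from scratch.
head = nn.Linear(64, 3)
model = nn.Sequential(backbone, head)

# Only the head's parameters are handed to the optimiser.
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One fine-tuning step on a dummy batch.
x = torch.randn(8, 32)
y = torch.randint(0, 3, (8,))
loss = loss_fn(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

With more target data available, a common variant is to unfreeze the last few backbone layers as well and train them with a smaller learning rate than the head.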

Papers

Showing 101–125 of 10,307 papers

Title | Status | Hype
LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS | Code | 2
CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images | Code | 2
Event Stream-based Visual Object Tracking: A High-Resolution Benchmark Dataset and A Novel Baseline | Code | 2
A physics-informed and attention-based graph learning approach for regional electric vehicle charging demand prediction | Code | 2
ExpeL: LLM Agents Are Experiential Learners | Code | 2
LP-MusicCaps: LLM-Based Pseudo Music Captioning | Code | 2
A Survey on Open-Vocabulary Detection and Segmentation: Past, Present, and Future | Code | 2
Global birdsong embeddings enable superior transfer learning for bioacoustic classification | Code | 2
Foundation Model for Endoscopy Video Analysis via Large-scale Self-supervised Pre-train | Code | 2
Segment Any Point Cloud Sequences by Distilling Vision Foundation Models | Code | 2
One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning | Code | 2
TIES-Merging: Resolving Interference When Merging Models | Code | 2
BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks | Code | 2
Lion: Adversarial Distillation of Proprietary Large Language Models | Code | 2
Pengi: An Audio Language Model for Audio Tasks | Code | 2
A Survey on Time-Series Pre-Trained Models | Code | 2
VPGTrans: Transfer Visual Prompt Generator across LLMs | Code | 2
Lightweight, Pre-trained Transformers for Remote Sensing Timeseries | Code | 2
Leveraging medical Twitter to build a visual–language foundation model for pathology AI | Code | 2
SF2Former: Amyotrophic Lateral Sclerosis Identification From Multi-center MRI Data Using Spatial and Frequency Fusion Transformer | Code | 2
Offsite-Tuning: Transfer Learning without Full Model | Code | 2
Continual Pre-training of Language Models | Code | 2
Discovery of 2D materials using Transformer Network based Generative Design | Code | 2
CLIP-Driven Universal Model for Organ Segmentation and Tumor Detection | Code | 2
Towards A Unified Conformer Structure: from ASR to ASV Task | Code | 2
Page 5 of 413

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | APCLIP | Accuracy | 84.2 | — | Unverified
2 | DFA-ENT | Accuracy | 69.2 | — | Unverified
3 | DFA-SAFN | Accuracy | 69.1 | — | Unverified
4 | EasyTL | Accuracy | 63.3 | — | Unverified
5 | MEDA | Accuracy | 60.3 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | CNN | 10-20% Mask PSNR | 3.23 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Chatterjee, Dutta et al. [1] | Accuracy | 96.12 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Co-Tuning | Accuracy | 85.65 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | Physical Access | EER | 5.74 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | riadd.aucmedi | AUROC | 0.95 | — | Unverified