SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks while updating only a small fraction of the model's parameters, typically by freezing the pre-trained weights and training a small set of added or selected parameters. This approach is particularly useful when computational resources are limited, and it helps preserve the original model's capabilities on the task it was pre-trained for.
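
The low-rank adaptation idea (LoRA, which appears in the paper list below) is a representative PEFT method. Below is a minimal sketch, assuming PyTorch; the layer size, rank r, and scaling alpha are illustrative values, not taken from any paper on this page.

```python
# Minimal LoRA-style sketch: freeze a pre-trained linear layer and learn a
# low-rank update, so the effective weight becomes W + (alpha / r) * B @ A.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        self.scale = alpha / r
        # Low-rank factors: A projects down to rank r, B projects back up.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B is zero-initialized, so the wrapped layer starts out identical
        # to the frozen base layer.
        self.B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: only A and B (2 * r * d parameters) are trainable.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # 12288, vs. 590592 for full fine-tuning
```

Because only the adapter parameters receive gradients, optimizer state and checkpoint sizes shrink accordingly, which is what makes this practical on limited hardware.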

Papers

Showing 51–75 of 935 papers (page 3 of 38)

Title | Status | Hype
Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation | Code | 2
Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance | Code | 2
An Embarrassingly Simple Approach for LLM with Strong ASR Capacity | Code | 2
Learning to Route Among Specialized Experts for Zero-Shot Generalization | Code | 2
PeFoMed: Parameter Efficient Fine-tuning of Multimodal Large Language Models for Medical Imaging | Code | 2
ClassWise-SAM-Adapter: Parameter Efficient Fine-tuning Adapts Segment Anything to SAR Domain for Semantic Segmentation | Code | 2
MTLoRA: Low-Rank Adaptation Approach for Efficient Multi-Task Learning | Code | 2
mLoRA: Fine-Tuning LoRA Adapters via Highly-Efficient Pipeline Parallelism in Multiple GPUs | Code | 2
CoLLiE: Collaborative Training of Large Language Models in an Efficient Way | Code | 2
FATE-LLM: A Industrial Grade Federated Learning Framework for Large Language Models | Code | 2
Pushing Mixture of Experts to the Limit: Extremely Parameter Efficient MoE for Instruction Tuning | Code | 2
Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following | Code | 2
RS5M and GeoRSCLIP: A Large Scale Vision-Language Dataset and A Large Vision-Language Model for Remote Sensing | Code | 2
Full Parameter Fine-tuning for Large Language Models with Limited Resources | Code | 2
One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning | Code | 2
Explicit Visual Prompting for Universal Foreground Segmentations | Code | 2
GlyphDraw: Seamlessly Rendering Text with Intricate Spatial Structures in Text-to-Image Generation | Code | 2
LoRA: Low-Rank Adaptation of Large Language Models | Code | 2
Dynamic Mixture of Progressive Parameter-Efficient Expert Library for Lifelong Robot Learning | Code | 1
CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning | Code | 1
DA-VPT: Semantic-Guided Visual Prompt Tuning for Vision Transformers | Code | 1
LoKI: Low-damage Knowledge Implanting of Large Language Models | Code | 1
Universal Reasoner: A Single, Composable Plug-and-Play Reasoner for Frozen LLMs | Code | 1
Gated Integration of Low-Rank Adaptation for Continual Learning of Language Models | Code | 1
Quaff: Quantized Parameter-Efficient Fine-Tuning under Outlier Spatial Stability Hypothesis | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | — | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | — | Unverified