
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of the model's parameters, or a small set of newly added ones, while keeping the rest frozen. This sharply reduces the compute and memory needed for fine-tuning, and it is especially useful when computational resources are limited or when the base model's performance on its original tasks should be preserved.
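
To make the idea concrete, here is a minimal sketch of one widely used PEFT method, low-rank adaptation (LoRA), which appears throughout the papers listed below. It assumes PyTorch; the `LoRALinear` class and the chosen rank and dimensions are illustrative, not taken from any specific paper or library on this page.

```python
# Minimal LoRA sketch (assumption: PyTorch; "LoRALinear" and the chosen
# rank/dimensions are illustrative, not from any paper listed here).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a trainable low-rank update."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights: only A and B below are trained.
        for p in self.base.parameters():
            p.requires_grad_(False)
        # Low-rank factors: the effective weight is W + (alpha / r) * B @ A.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # B starts at zero, so training begins from the base model's output.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # trainable: 12288 / 602880
```

Because B is initialized to zero, the low-rank update contributes nothing at first and the wrapped layer initially reproduces the frozen model exactly; only the 12,288 low-rank parameters receive gradients, versus 589,824 in the frozen weight matrix.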

Papers

Showing 51–100 of 935 papers

| Title | Status | Hype |
| --- | --- | --- |
| GlyphDraw: Seamlessly Rendering Text with Intricate Spatial Structures in Text-to-Image Generation | Code | 2 |
| mLoRA: Fine-Tuning LoRA Adapters via Highly-Efficient Pipeline Parallelism in Multiple GPUs | Code | 2 |
| MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning | Code | 2 |
| One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning | Code | 2 |
| Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning | Code | 2 |
| Low-Rank Quantization-Aware Training for LLMs | Code | 2 |
| MiniGPT-3D: Efficiently Aligning 3D Point Clouds with Large Language Models using 2D Priors | Code | 2 |
| Parameter-Efficient Fine-Tuning for Foundation Models | Code | 2 |
| LoRA-Pro: Are Low-Rank Adapters Properly Optimized? | Code | 2 |
| Earth-Adapter: Bridge the Geospatial Domain Gaps with Mixture of Frequency Adaptation | Code | 2 |
| Efficient Remote Sensing with Harmonized Transfer Learning and Modality Alignment | Code | 2 |
| Make LoRA Great Again: Boosting LoRA with Adaptive Singular Values and Mixture-of-Experts Optimization Alignment | Code | 2 |
| LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters | Code | 2 |
| Any2Point: Empowering Any-modality Large Models for Efficient 3D Understanding | Code | 2 |
| Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation | Code | 2 |
| Balancing LoRA Performance and Efficiency with Simple Shard Sharing | Code | 2 |
| Learning to Route Among Specialized Experts for Zero-Shot Generalization | Code | 2 |
| LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation | Code | 2 |
| AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning | Code | 1 |
| An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models | Code | 1 |
| I-MedSAM: Implicit Medical Image Segmentation with Segment Anything | Code | 1 |
| Imaging foundation model for universal enhancement of non-ideal measurement CT | Code | 1 |
| Increasing Model Capacity for Free: A Simple Strategy for Parameter Efficient Fine-tuning | Code | 1 |
| Hyperdecoders: Instance-specific decoders for multi-task NLP | Code | 1 |
| Hydra: Multi-head Low-rank Adaptation for Parameter Efficient Fine-tuning | Code | 1 |
| IISAN: Efficiently Adapting Multimodal Representation for Sequential Recommendation with Decoupled PEFT | Code | 1 |
| Do Vision Foundation Models Enhance Domain Generalization in Medical Image Segmentation? | Code | 1 |
| Domain Generalization Using Large Pretrained Models with Mixture-of-Adapters | Code | 1 |
| ILLUMINER: Instruction-tuned Large Language Models as Few-shot Intent Classifier and Slot Filler | Code | 1 |
| IncreLoRA: Incremental Parameter Allocation Method for Parameter-Efficient Fine-tuning | Code | 1 |
| DiffFit: Unlocking Transferability of Large Diffusion Models via Simple Parameter-Efficient Fine-Tuning | Code | 1 |
| AlphaLoRA: Assigning LoRA Experts Based on Layer Training Quality | Code | 1 |
| Harnessing Large Language Models for Text-Rich Sequential Recommendation | Code | 1 |
| DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning | Code | 1 |
| AdapterGNN: Parameter-Efficient Fine-Tuning Improves Generalization in GNNs | Code | 1 |
| HALO: Hadamard-Assisted Lower-Precision Optimization for LLMs | Code | 1 |
| HiFT: A Hierarchical Full Parameter Fine-Tuning Strategy | Code | 1 |
| GIST: Improving Parameter Efficient Fine Tuning via Knowledge Interaction | Code | 1 |
| AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning | Code | 1 |
| Generative Parameter-Efficient Fine-Tuning | Code | 1 |
| DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation | Code | 1 |
| DeeCLIP: A Robust and Generalizable Transformer-Based Framework for Detecting AI-Generated Images | Code | 1 |
| Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning | Code | 1 |
| Density Adaptive Attention is All You Need: Robust Parameter-Efficient Fine-Tuning Across Multiple Modalities | Code | 1 |
| Gradient-based Parameter Selection for Efficient Fine-Tuning | Code | 1 |
| Cross-Modal Adapter for Text-Video Retrieval | Code | 1 |
| Aggregate, Decompose, and Fine-Tune: A Simple Yet Effective Factor-Tuning Method for Vision Transformer | Code | 1 |
| DA-VPT: Semantic-Guided Visual Prompt Tuning for Vision Transformers | Code | 1 |
| Customizing Language Models with Instance-wise LoRA for Sequential Recommendation | Code | 1 |
| FonTS: Text Rendering with Typography and Style Controls | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 81.93 | – | Unverified |
| 4 | LLaMA2-7b | Accuracy (%) | 80.28 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 76.68 | – | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 76.67 | – | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 76.27 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 70.8 | – | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 70.09 | – | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 69.85 | – | Unverified |