SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by updating only a small fraction of the model's parameters (or a small set of added parameters) while keeping the rest frozen. This approach is particularly useful when computational resources are limited, or when the original model's capabilities on its initial task should be preserved.
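
Many of the papers indexed below are variants of low-rank adaptation (LoRA), one of the most common PEFT methods. As a rough illustration of the underlying idea, here is a minimal, self-contained PyTorch sketch of a LoRA-style layer; the class name, rank, scaling, and layer sizes are illustrative choices, not taken from any listed paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B(A(x)), where only A and B are trained."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        self.lora_A = nn.Linear(base.in_features, r, bias=False)
        self.lora_B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)  # start as a no-op update
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))

# Hypothetical usage: adapt a single 512x512 projection.
layer = LoRALinear(nn.Linear(512, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.1f}%)")
```

In this toy setting only about 3% of the layer's parameters are trainable, which is the core trade-off PEFT methods exploit: most of the network stays frozen, and only a small adapter is optimized and stored per task.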

Papers

Showing 101–150 of 935 papers

Title | Status | Hype
Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning | Code | 1
IncreLoRA: Incremental Parameter Allocation Method for Parameter-Efficient Fine-tuning | Code | 1
AutoVP: An Automated Visual Prompting Framework and Benchmark | Code | 1
AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning | Code | 1
Imaging foundation model for universal enhancement of non-ideal measurement CT | Code | 1
AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning | Code | 1
Hyperdecoders: Instance-specific decoders for multi-task NLP | Code | 1
AlphaLoRA: Assigning LoRA Experts Based on Layer Training Quality | Code | 1
LoRA Subtraction for Drift-Resistant Space in Exemplar-Free Continual Learning | Code | 1
IISAN: Efficiently Adapting Multimodal Representation for Sequential Recommendation with Decoupled PEFT | Code | 1
Advancing Parameter Efficiency in Fine-tuning via Representation Editing | Code | 1
DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation | Code | 1
Hydra: Multi-head Low-rank Adaptation for Parameter Efficient Fine-tuning | Code | 1
ILLUMINER: Instruction-tuned Large Language Models as Few-shot Intent Classifier and Slot Filler | Code | 1
Domain Generalization Using Large Pretrained Models with Mixture-of-Adapters | Code | 1
MapSAM: Adapting Segment Anything Model for Automated Feature Detection in Historical Maps | Code | 1
C2A: Client-Customized Adaptation for Parameter-Efficient Federated Learning | Code | 1
MasakhaPOS: Part-of-Speech Tagging for Typologically Diverse African Languages | Code | 1
IntLoRA: Integral Low-rank Adaptation of Quantized Diffusion Models | Code | 1
Asymmetry in Low-Rank Adapters of Foundation Models | Code | 1
Harnessing Large Language Models for Text-Rich Sequential Recommendation | Code | 1
MELoRA: Mini-Ensemble Low-Rank Adapters for Parameter-Efficient Fine-Tuning | Code | 1
An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models | Code | 1
Do Vision Foundation Models Enhance Domain Generalization in Medical Image Segmentation? | Code | 1
Gradient-based Parameter Selection for Efficient Fine-Tuning | Code | 1
When MOE Meets LLMs: Parameter Efficient Fine-tuning for Multi-task Medical Applications | Code | 1
HALO: Hadamard-Assisted Lower-Precision Optimization for LLMs | Code | 1
HiFT: A Hierarchical Full Parameter Fine-Tuning Strategy | Code | 1
Density Adaptive Attention is All You Need: Robust Parameter-Efficient Fine-Tuning Across Multiple Modalities | Code | 1
Generative Parameter-Efficient Fine-Tuning | Code | 1
Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning | Code | 1
DeeCLIP: A Robust and Generalizable Transformer-Based Framework for Detecting AI-Generated Images | Code | 1
Gated Integration of Low-Rank Adaptation for Continual Learning of Language Models | Code | 1
GIST: Improving Parameter Efficient Fine Tuning via Knowledge Interaction | Code | 1
Joint Localization and Activation Editing for Low-Resource Fine-Tuning | Code | 1
FLoRA: Low-Rank Core Space for N-dimension | Code | 1
FonTS: Text Rendering with Typography and Style Controls | Code | 1
Embedded Prompt Tuning: Towards Enhanced Calibration of Pretrained Models for Medical Images | Code | 1
FineDiffusion: Scaling up Diffusion Models for Fine-grained Image Generation with 10,000 Classes | Code | 1
Forecast-PEFT: Parameter-Efficient Fine-Tuning for Pre-trained Motion Forecasting Models | Code | 1
Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes | Code | 1
Content-based Controls For Music Large Language Modeling | Code | 1
FedJudge: Federated Legal Large Language Model | Code | 1
DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models | Code | 1
ComPEFT: Compression for Communicating Parameter Efficient Updates via Sparsification and Quantization | Code | 1
DA-VPT: Semantic-Guided Visual Prompt Tuning for Vision Transformers | Code | 1
A Comprehensive Analysis of Adapter Efficiency | Code | 1
FairTune: Optimizing Parameter Efficient Fine Tuning for Fairness in Medical Image Analysis | Code | 1
CVPT: Cross-Attention help Visual Prompt Tuning adapt visual task | Code | 1
Customizing Language Models with Instance-wise LoRA for Sequential Recommendation | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | - | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | - | Unverified