SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by updating only a small fraction of the model's parameters (or a small set of added parameters) while keeping the rest frozen. This approach is particularly useful when computational resources are limited or when the original model's behavior on its initial tasks should be preserved.
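To make the idea concrete, below is a minimal sketch of one widely used PEFT technique, low-rank adaptation (LoRA), written in PyTorch. The class name, rank, and scaling values are illustrative choices, not taken from any paper listed on this page: the pre-trained weights stay frozen and only a small low-rank correction is trained.

```python
# Minimal LoRA sketch (illustrative; assumes PyTorch is installed).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update W + (alpha/r) * B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze the pre-trained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # Original output plus the low-rank correction; only A and B receive gradients.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total} parameters")  # only the LoRA factors train
```

In this sketch roughly 2% of the layer's parameters are trainable; many of the papers below build on this or closely related adapter, prompt-tuning, and sparse-update schemes.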

Papers

Showing 151–200 of 935 papers

Title | Status | Hype
OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models | Code | 1
Exploring Foundation Models Fine-Tuning for Cytology Classification | Code | 1
MasakhaNEWS: News Topic Classification for African languages | Code | 1
Expanding Sparse Tuning for Low Memory Usage | Code | 1
MasakhaPOS: Part-of-Speech Tagging for Typologically Diverse African Languages | Code | 1
Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models | Code | 1
Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning | Code | 1
MapSAM: Adapting Segment Anything Model for Automated Feature Detection in Historical Maps | Code | 1
CoPEFT: Fast Adaptation Framework for Multi-Agent Collaborative Perception with Parameter-Efficient Fine-Tuning | Code | 1
FairTune: Optimizing Parameter Efficient Fine Tuning for Fairness in Medical Image Analysis | Code | 1
Embedded Prompt Tuning: Towards Enhanced Calibration of Pretrained Models for Medical Images | Code | 1
Customizing Language Models with Instance-wise LoRA for Sequential Recommendation | Code | 1
Enhancing Zeroth-order Fine-tuning for Language Models with Low-rank Structures | Code | 1
MambaPEFT: Exploring Parameter-Efficient Fine-Tuning for Mamba | Code | 1
MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation | Code | 1
MeteoRA: Multiple-tasks Embedded LoRA for Large Language Models | Code | 1
DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models | Code | 1
Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes | Code | 1
DA-VPT: Semantic-Guided Visual Prompt Tuning for Vision Transformers | Code | 1
FedJudge: Federated Legal Large Language Model | Code | 1
Empirical Study of PEFT techniques for Winter Wheat Segmentation | Code | 1
Content-based Controls For Music Large Language Modeling | Code | 1
LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models | Code | 1
When MOE Meets LLMs: Parameter Efficient Fine-tuning for Multi-task Medical Applications | Code | 1
Advancing Parameter Efficiency in Fine-tuning via Representation Editing | Code | 1
Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning | Code | 1
DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning | Code | 1
FLoRA: Low-Rank Core Space for N-dimension | Code | 1
Low-Rank Rescaled Vision Transformer Fine-Tuning: A Residual Design Approach | Code | 1
Empowering Smaller Models: Tuning LLaMA and Gemma with Chain-of-Thought for Ukrainian Exam Tasks | Code | 1
ComPEFT: Compression for Communicating Parameter Efficient Updates via Sparsification and Quantization | Code | 1
DiffFit: Unlocking Transferability of Large Diffusion Models via Simple Parameter-Efficient Fine-Tuning | Code | 1
AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning | Code | 1
Density Adaptive Attention is All You Need: Robust Parameter-Efficient Fine-Tuning Across Multiple Modalities | Code | 1
AutoVP: An Automated Visual Prompting Framework and Benchmark | Code | 1
GIST: Improving Parameter Efficient Fine Tuning via Knowledge Interaction | Code | 1
LoRA Subtraction for Drift-Resistant Space in Exemplar-Free Continual Learning | Code | 1
A Comprehensive Analysis of Adapter Efficiency | Code | 1
Efficient Test Time Adapter Ensembling for Low-resource Language Varieties | Code | 1
Gradient-based Parameter Selection for Efficient Fine-Tuning | Code | 1
MediViSTA: Medical Video Segmentation via Temporal Fusion SAM Adaptation for Echocardiography | Code | 1
Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters | Code | 1
Hyperdecoders: Instance-specific decoders for multi-task NLP | Code | 1
Efficient Localized Adaptation of Neural Weather Forecasting: A Case Study in the MENA Region | Code | 1
EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models | Code | 1
Do Vision Foundation Models Enhance Domain Generalization in Medical Image Segmentation? | Code | 1
APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference | Code | 1
A Prompt Learning Framework for Source Code Summarization | Code | 1
TS-SAM: Fine-Tuning Segment-Anything Model for Downstream Tasks | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | - | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | - | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | - | Unverified