
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by training only a small fraction of the parameters: the original weights are typically frozen, and a compact set of added or selected parameters (e.g., low-rank adapters, prompts, or bias terms) is updated instead. This makes fine-tuning feasible when compute and memory are limited, and it helps preserve the base model's behavior on the tasks it was originally trained for.
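
To make this concrete, below is a minimal sketch of one widely used PEFT technique, LoRA-style low-rank adaptation, written in PyTorch. The layer sizes, rank, and scaling factor are illustrative assumptions for this example, not values taken from any paper listed below.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a pretrained nn.Linear with a trainable low-rank update.

    The frozen base layer computes W x + b; the adapter adds
    (alpha / r) * B A x, where A is (r x in) and B is (out x r),
    so only r * (in + out) parameters are trained instead of in * out.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B is zero-initialized so the adapter starts as a no-op
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Illustrative numbers: adapting a 768 -> 768 projection with rank 8.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # 12288 / 602880, about 2%
```

The zero initialization of B is the standard LoRA trick: at the start of fine-tuning the wrapped layer behaves exactly like the frozen pretrained layer, and the low-rank update grows from zero as training proceeds.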

Papers

Showing 201–250 of 935 papers

Title | Status | Hype
Ferret: Federated Full-Parameter Tuning at Scale for Large Language Models | Code | 1
Parameter-Efficient Fine-Tuning with Layer Pruning on Free-Text Sequence-to-Sequence Modeling | Code | 1
Dynamic Mixture of Progressive Parameter-Efficient Expert Library for Lifelong Robot Learning | Code | 1
Hydra: Multi-head Low-rank Adaptation for Parameter Efficient Fine-tuning | Code | 1
ILLUMINER: Instruction-tuned Large Language Models as Few-shot Intent Classifier and Slot Filler | Code | 1
A Prompt Learning Framework for Source Code Summarization | Code | 1
AdapterGNN: Parameter-Efficient Fine-Tuning Improves Generalization in GNNs | Code | 1
FineDiffusion: Scaling up Diffusion Models for Fine-grained Image Generation with 10,000 Classes | Code | 1
Positional Prompt Tuning for Efficient 3D Representation Learning | Code | 1
Expanding Sparse Tuning for Low Memory Usage | Code | 1
LoRAPrune: Structured Pruning Meets Low-Rank Parameter-Efficient Fine-Tuning | Code | 1
Harnessing Large Language Models for Text-Rich Sequential Recommendation | Code | 1
TS-SAM: Fine-Tuning Segment-Anything Model for Downstream Tasks | Code | 1
Quaff: Quantized Parameter-Efficient Fine-Tuning under Outlier Spatial Stability Hypothesis | Code | 1
BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models | Code | 1
RepairLLaMA: Efficient Representations and Fine-Tuned Adapters for Program Repair | Code | 1
AlphaLoRA: Assigning LoRA Experts Based on Layer Training Quality | Code | 1
Rethinking Few-Shot Adaptation of Vision-Language Models in Two Stages | Code | 1
R-LoRA: Random Initialization of Multi-Head LoRA for Multi-Task Learning | Code | 1
EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models | Code | 1
HiFT: A Hierarchical Full Parameter Fine-Tuning Strategy | Code | 1
Imaging foundation model for universal enhancement of non-ideal measurement CT | Code | 1
CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning | Code | 1
Generative Parameter-Efficient Fine-Tuning | Code | 1
Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters | Code | 1
Scaling Sparse Fine-Tuning to Large Language Models | Code | 1
Enhancing Zeroth-order Fine-tuning for Language Models with Low-rank Structures | Code | 1
GIST: Improving Parameter Efficient Fine Tuning via Knowledge Interaction | Code | 1
Efficient Self-Supervised Adaptation for Medical Image Analysis | Code | 1
Gradient-based Parameter Selection for Efficient Fine-Tuning | Code | 1
FLoRA: Low-Rank Core Space for N-dimension | Code | 1
Efficient Test Time Adapter Ensembling for Low-resource Language Varieties | Code | 1
GAPrompt: Geometry-Aware Point Cloud Prompt for 3D Vision Model | Code | 1
Gated Integration of Low-Rank Adaptation for Continual Learning of Language Models | Code | 1
C2A: Client-Customized Adaptation for Parameter-Efficient Federated Learning | Code | 1
SoTaNa: The Open-Source Software Development Assistant | Code | 1
Sparse Matrix in Large Language Model Fine-tuning | Code | 1
Sparse-Tuning: Adapting Vision Transformers with Efficient Fine-tuning and Inference | Code | 1
ABBA: Highly Expressive Hadamard Product Adaptation for Large Language Models | Code | 1
Density Adaptive Attention is All You Need: Robust Parameter-Efficient Fine-Tuning Across Multiple Modalities | Code | 1
Forecast-PEFT: Parameter-Efficient Fine-Tuning for Pre-trained Motion Forecasting Models | Code | 1
SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models | Code | 1
SUR-adapter: Enhancing Text-to-Image Pre-trained Diffusion Models with Large Language Models | Code | 1
SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors | Code | 1
KIF: Knowledge Identification and Fusion for Language Model Continual Learning | Code | 1
HALO: Hadamard-Assisted Lower-Precision Optimization for LLMs | Code | 1
An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models | Code | 1
FonTS: Text Rendering with Typography and Style Controls | Code | 1
Towards Efficient Visual-Language Alignment of the Q-Former for Visual Reasoning Tasks | Code | 1
I-MedSAM: Implicit Medical Image Segmentation with Segment Anything | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | — | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | — | Unverified