SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) refers to a family of techniques for adapting pre-trained models to new tasks while updating only a small fraction of the model's parameters. This approach is particularly useful when computational resources are limited or when the original model's performance on its initial task must be preserved.
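
Low-rank adaptation (LoRA), which many of the papers listed below build on, is one representative PEFT method: the pre-trained weights stay frozen and only a small low-rank update is trained. The PyTorch sketch below is a minimal illustration of that idea; it is not taken from any listed paper, and the `LoRALinear` class name and the hyperparameters `r` and `alpha` are illustrative choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x. Only A and B are trained,
    i.e. r * (d_in + d_out) parameters instead of d_in * d_out."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the pre-trained weights
        self.scaling = alpha / r
        # A starts random, B starts at zero, so the wrapped layer is
        # initially identical to the pre-trained one.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Example: wrap a layer standing in for a pre-trained projection.
layer = nn.Linear(768, 768)
adapted = LoRALinear(layer, r=8)
trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
total = sum(p.numel() for p in adapted.parameters())
print(f"trainable parameters: {trainable} / {total}")  # ~12k of ~600k
```

With `r=8` the adapter trains roughly 2% of the layer's parameters, which illustrates the parameter savings that motivate this family of methods.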

Papers

Showing 201–225 of 935 papers

| Title | Status | Hype |
| --- | --- | --- |
| GAPrompt: Geometry-Aware Point Cloud Prompt for 3D Vision Model | Code | 1 |
| Gradient-based Parameter Selection for Efficient Fine-Tuning | Code | 1 |
| Dynamic Mixture of Progressive Parameter-Efficient Expert Library for Lifelong Robot Learning | Code | 1 |
| CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning | Code | 1 |
| Parameter Efficient Fine-Tuning of Segment Anything Model | Code | 1 |
| FLoRA: Low-Rank Core Space for N-dimension | Code | 1 |
| AdapterGNN: Parameter-Efficient Fine-Tuning Improves Generalization in GNNs | Code | 1 |
| ST-Adapter: Parameter-Efficient Image-to-Video Transfer Learning | Code | 1 |
| Ferret: Federated Full-Parameter Tuning at Scale for Large Language Models | Code | 1 |
| Parameter Efficient Multi-task Model Fusion with Partial Linearization | Code | 1 |
| FineDiffusion: Scaling up Diffusion Models for Fine-grained Image Generation with 10,000 Classes | Code | 1 |
| PointLoRA: Low-Rank Adaptation with Token Selection for Point Cloud Learning | Code | 1 |
| Positional Prompt Tuning for Efficient 3D Representation Learning | Code | 1 |
| FonTS: Text Rendering with Typography and Style Controls | Code | 1 |
| BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models | Code | 1 |
| Efficient Test Time Adapter Ensembling for Low-resource Language Varieties | Code | 1 |
| AlphaLoRA: Assigning LoRA Experts Based on Layer Training Quality | Code | 1 |
| Efficient Localized Adaptation of Neural Weather Forecasting: A Case Study in the MENA Region | Code | 1 |
| Random Masking Finds Winning Tickets for Parameter Efficient Fine-tuning | Code | 1 |
| EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models | Code | 1 |
| RepairLLaMA: Efficient Representations and Fine-Tuned Adapters for Program Repair | Code | 1 |
| Rethinking Addressing in Language Models via Contextualized Equivariant Positional Encoding | Code | 1 |
| Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes | Code | 1 |
| ABBA: Highly Expressive Hadamard Product Adaptation for Large Language Models | Code | 1 |
| Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters | Code | 1 |
Page 9 of 38

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 81.93 | — | Unverified |
| 4 | LLaMA2-7b | Accuracy (%) | 80.28 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 76.68 | — | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 76.67 | — | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 76.27 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 70.8 | — | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 70.09 | — | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 69.85 | — | Unverified |