
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of their parameters, or by adding small trainable modules, while the bulk of the original weights stays frozen. This is particularly useful when compute and memory are limited, and it helps preserve the model's performance on the tasks it was originally trained for.
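To make the idea concrete, here is a minimal sketch of one widely used PEFT method, LoRA (low-rank adaptation), written in plain PyTorch. The `LoRALinear` wrapper class, the rank `r=8`, the scaling `alpha=16`, and the 768-dimensional toy layer are all illustrative choices for this sketch, not taken from any paper listed below.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    h = W x + (alpha / r) * B A x, where only A and B are trained."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # A is small random, B is zero, so the wrapped layer starts out
        # computing exactly the same function as the frozen base layer.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the low-rank trainable correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Toy example: a 768x768 layer (dimensions are illustrative).
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.2f}%)")
```

With these settings only about 2% of the layer's parameters receive gradients, which is the core trade PEFT methods make: a small trainable footprint in exchange for near-full fine-tuning quality on the target task.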

Papers

Showing 151–175 of 935 papers (page 7 of 38)

Title | Status | Hype
Embedded Prompt Tuning: Towards Enhanced Calibration of Pretrained Models for Medical Images | Code | 1
CoPEFT: Fast Adaptation Framework for Multi-Agent Collaborative Perception with Parameter-Efficient Fine-Tuning | Code | 1
FineDiffusion: Scaling up Diffusion Models for Fine-grained Image Generation with 10,000 Classes | Code | 1
ComPEFT: Compression for Communicating Parameter Efficient Updates via Sparsification and Quantization | Code | 1
FLoRA: Low-Rank Core Space for N-dimension | Code | 1
A Comprehensive Analysis of Adapter Efficiency | Code | 1
FonTS: Text Rendering with Typography and Style Controls | Code | 1
HiFT: A Hierarchical Full Parameter Fine-Tuning Strategy | Code | 1
GIST: Improving Parameter Efficient Fine Tuning via Knowledge Interaction | Code | 1
Dynamic Mixture of Progressive Parameter-Efficient Expert Library for Lifelong Robot Learning | Code | 1
Imaging foundation model for universal enhancement of non-ideal measurement CT | Code | 1
Customizing Language Models with Instance-wise LoRA for Sequential Recommendation | Code | 1
Hyperdecoders: Instance-specific decoders for multi-task NLP | Code | 1
CVPT: Cross-Attention help Visual Prompt Tuning adapt visual task | Code | 1
FairTune: Optimizing Parameter Efficient Fine Tuning for Fairness in Medical Image Analysis | Code | 1
IntLoRA: Integral Low-rank Adaptation of Quantized Diffusion Models | Code | 1
DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models | Code | 1
Extending Whisper with prompt tuning to target-speaker ASR | Code | 1
DA-VPT: Semantic-Guided Visual Prompt Tuning for Vision Transformers | Code | 1
APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference | Code | 1
Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes | Code | 1
A Prompt Learning Framework for Source Code Summarization | Code | 1
Asymmetry in Low-Rank Adapters of Foundation Models | Code | 1
Exploring Foundation Models Fine-Tuning for Cytology Classification | Code | 1
Expanding Sparse Tuning for Low Memory Usage | Code | 1

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | - | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | - | Unverified