SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by training only a small fraction of the model's parameters, or small added modules such as adapters, prompts, or low-rank matrices, while the original weights stay frozen. This approach is particularly useful when computational resources are limited, and keeping the base model intact helps preserve its performance on the tasks it was originally trained for.
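As a concrete illustration, here is a minimal sketch of one widely used PEFT method, LoRA (low-rank adaptation), written against the Hugging Face peft library. The checkpoint name, rank, and target-module names below are illustrative assumptions, not settings taken from any paper on this page.

```python
# Minimal LoRA fine-tuning setup (sketch; hyperparameters are illustrative).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

# Any causal LM checkpoint works; this one matches the LLaMA2-7b entries
# in the benchmark tables below but requires gated access on the Hub.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA blocks
)

# Wraps the frozen base model; only the injected A/B matrices are trainable.
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

Because the update to each targeted weight matrix W is constrained to a low-rank product BA, only the small A and B matrices are trained while W stays frozen, which is why the trainable fraction stays so small.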

Papers

Showing 201–250 of 935 papers

Title | Status | Hype
ComPEFT: Compression for Communicating Parameter Efficient Updates via Sparsification and Quantization | Code | 1
FineDiffusion: Scaling up Diffusion Models for Fine-grained Image Generation with 10,000 Classes | Code | 1
Dynamic Mixture of Progressive Parameter-Efficient Expert Library for Lifelong Robot Learning | Code | 1
A Comprehensive Analysis of Adapter Efficiency | Code | 1
KInIT at SemEval-2024 Task 8: Fine-tuned LLMs for Multilingual Machine-Generated Text Detection | Code | 1
FedJudge: Federated Legal Large Language Model | Code | 1
GAPrompt: Geometry-Aware Point Cloud Prompt for 3D Vision Model | Code | 1
Lessons and Insights from a Unifying Study of Parameter-Efficient Fine-Tuning (PEFT) in Visual Recognition | Code | 1
Parameter-Efficient Fine-Tuning with Layer Pruning on Free-Text Sequence-to-Sequence Modeling | Code | 1
Less Could Be Better: Parameter-efficient Fine-tuning Advances Medical Vision Foundation Models | Code | 1
Extending Whisper with prompt tuning to target-speaker ASR | Code | 1
Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning | Code | 1
APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference | Code | 1
LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot Learners | Code | 1
FairTune: Optimizing Parameter Efficient Fine Tuning for Fairness in Medical Image Analysis | Code | 1
A Prompt Learning Framework for Source Code Summarization | Code | 1
AlphaLoRA: Assigning LoRA Experts Based on Layer Training Quality | Code | 1
Multi-Token Prediction Needs Registers | Code | 1
LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report | Code | 1
EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models | Code | 1
Adapting Pre-trained Language Models to African Languages via Multilingual Adaptive Fine-Tuning | Code | 1
TS-SAM: Fine-Tuning Segment-Anything Model for Downstream Tasks | Code | 1
Multimodal Instruction Tuning with Conditional Mixture of LoRA | Code | 1
Natural GaLore: Accelerating GaLore for memory-efficient LLM Training and Fine-tuning | Code | 1
Parameter Efficient Fine-tuning via Explained Variance Adaptation | Code | 1
LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models | Code | 1
CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning | Code | 1
Exploring Foundation Models Fine-Tuning for Cytology Classification | Code | 1
Efficient Self-Supervised Adaptation for Medical Image Analysis | Code | 1
MoST: Efficient Monarch Sparse Tuning for 3D Representation Learning | Code | 1
Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models | Code | 1
Efficient Test Time Adapter Ensembling for Low-resource Language Varieties | Code | 1
Low-Rank Rescaled Vision Transformer Fine-Tuning: A Residual Design Approach | Code | 1
MTL-LoRA: Low-Rank Adaptation for Multi-Task Learning | Code | 1
C2A: Client-Customized Adaptation for Parameter-Efficient Federated Learning | Code | 1
Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning | Code | 1
Enhancing Zeroth-order Fine-tuning for Language Models with Low-rank Structures | Code | 1
KIF: Knowledge Identification and Fusion for Language Model Continual Learning | Code | 1
Gated Integration of Low-Rank Adaptation for Continual Learning of Language Models | Code | 1
ABBA: Highly Expressive Hadamard Product Adaptation for Large Language Models | Code | 1
MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation | Code | 1
MediViSTA: Medical Video Segmentation via Temporal Fusion SAM Adaptation for Echocardiography | Code | 1
LLM-based Medical Assistant Personalization with Short- and Long-Term Memory Coordination | Code | 1
MELoRA: Mini-Ensemble Low-Rank Adapters for Parameter-Efficient Fine-Tuning | Code | 1
Expanding Sparse Tuning for Low Memory Usage | Code | 1
MeteoRA: Multiple-tasks Embedded LoRA for Large Language Models | Code | 1
An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models | Code | 1
Mixture of Low-rank Experts for Transferable AI-Generated Image Detection | Code | 1
MLAE: Masked LoRA Experts for Visual Parameter-Efficient Fine-Tuning | Code | 1
IntLoRA: Integral Low-rank Adaptation of Quantized Diffusion Models | Code | 1
Page 5 of 19

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | n/a | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | n/a | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | n/a | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | n/a | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | n/a | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | n/a | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | n/a | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | n/a | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | n/a | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | n/a | Unverified