SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks while updating only a small fraction of the model's parameters. This approach is particularly useful when computational resources are limited, or when the original model's performance on its initial task must be preserved.
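To make the idea concrete, below is a minimal sketch of one widely used PEFT method, LoRA (low-rank adaptation), which many of the papers listed here build on: the pre-trained weight is frozen, and only a small low-rank update B·A is trained. The `LoRALinear` class name, rank, and dimensions are illustrative choices, not taken from any listed paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    h = W x + (alpha / r) * B A x, where only A and B are trained."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        # A gets a small random init; B starts at zero, so the adapted
        # layer is exactly the pre-trained layer before any training.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Only A and B (2 * r * d parameters) receive gradients.
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} / {total:,} ({100 * trainable / total:.2f}%)")
```

With a 4096x4096 projection and rank 8, roughly 0.4% of the layer's parameters are trainable, which is the "minimal changes" trade-off PEFT methods aim for.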

Papers

Showing 251–275 of 935 papers

Title | Status | Hype
ABBA: Highly Expressive Hadamard Product Adaptation for Large Language Models | Code | 1
Expanding Sparse Tuning for Low Memory Usage | Code | 1
Enhancing Zeroth-order Fine-tuning for Language Models with Low-rank Structures | Code | 1
CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning | Code | 1
FLoRA: Low-Rank Core Space for N-dimension | Code | 1
Harnessing Large Language Models for Text-Rich Sequential Recommendation | Code | 1
IISAN: Efficiently Adapting Multimodal Representation for Sequential Recommendation with Decoupled PEFT | Code | 1
LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report | Code | 1
MeteoRA: Multiple-tasks Embedded LoRA for Large Language Models | Code | 1
LoRAPrune: Structured Pruning Meets Low-Rank Parameter-Efficient Fine-Tuning | Code | 1
Exploring Foundation Models Fine-Tuning for Cytology Classification | Code | 1
Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models | Code | 1
Exact and Efficient Unlearning for Large Language Model-based Recommendation | – | 0
Choice of PEFT Technique in Continual Learning: Prompt Tuning is Not All You Need | – | 0
Enhancing the efficiency of protein language models with minimal wet-lab data through few-shot learning | – | 0
An Empirical Study on the Transferability of Transformer Modules in Parameter-Efficient Fine-Tuning | – | 0
Enhancing News Summarization with ELearnFit through Efficient In-Context Learning and Efficient Fine-Tuning | – | 0
Enhancing Multi-modal Models with Heterogeneous MoE Adapters for Fine-tuning | – | 0
Challenges in Adapting Multilingual LLMs to Low-Resource Languages using LoRA PEFT Tuning | – | 0
Enhancing Multilingual Speech Recognition through Language Prompt Tuning and Frame-Level Language Adapter | – | 0
Enhancing Low-Resource LLMs Classification with PEFT and Synthetic Data | – | 0
Chain of LoRA: Efficient Fine-tuning of Language Models via Residual Learning | – | 0
GP-MoLFormer: A Foundation Model For Molecular Generation | – | 0
Enhancing Large Language Model Efficiency via Symbolic Compression: A Formal Approach Towards Interpretability | – | 0
Enhancing knowledge retention for continual learning with domain-specific adapters and features gating | – | 0
Page 11 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | – | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | – | Unverified