SOTAVerified

parameter-efficient fine-tuning

Parameter-Efficient Fine-Tuning (PEFT) refers to a family of techniques for adapting pre-trained models to new tasks by training only a small fraction of the parameters, or small added modules such as adapters or low-rank updates, while keeping the bulk of the pre-trained weights frozen. This approach is particularly useful when computational resources are limited, and it helps preserve the original model's performance on its initial tasks.
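
As an illustration, the sketch below shows one common PEFT method, low-rank adaptation (LoRA), in plain PyTorch: the pre-trained linear layer is frozen and only a small pair of low-rank factors is trained. The class name, rank, and layer sizes are illustrative assumptions, not taken from any paper listed on this page.

# Minimal LoRA-style sketch (one common PEFT method): freeze a pre-trained
# linear layer and learn only a low-rank update W + (alpha / r) * B @ A.
# All names and sizes here are illustrative, not tied to any specific paper.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # pre-trained weights stay frozen
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # Original frozen path plus the trainable low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Example: adapt a single 768x768 projection layer.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")  # only the A/B factors train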

Papers

Showing 451–475 of 935 papers

Title | Status | Hype
Exploring Adapter Design Tradeoffs for Low Resource Music Generation | — | 0
Exploring Parameter-Efficient Fine-Tuning to Enable Foundation Models in Federated Learning | — | 0
Exploring Zero and Few-shot Techniques for Intent Classification | — | 0
External Prompt Features Enhanced Parameter-efficient Fine-tuning for Salient Object Detection | — | 0
F^3OCUS -- Federated Finetuning of Vision-Language Foundation Models with Optimal Client Layer Updating Strategy via Multi-objective Meta-Heuristics | — | 0
FairLoRA: Unpacking Bias Mitigation in Vision Models with Fairness-Driven Low-Rank Adaptation | — | 0
FastEdit: Fast Text-Guided Single-Image Editing via Semantic-Aware Diffusion Fine-Tuning | — | 0
Fast-NTK: Parameter-Efficient Unlearning for Large-Scale Models | — | 0
FeDeRA: Efficient Fine-tuning of Language Models in Federated Learning Leveraging Weight Decomposition | — | 0
Federated Adapter on Foundation Models: An Out-Of-Distribution Approach | — | 0
Federated Adversarial Learning for Robust Autonomous Landing Runway Detection | — | 0
Federated Fine-tuning of Large Language Models under Heterogeneous Tasks and Client Resources | — | 0
Federated Low-Rank Adaptation with Differential Privacy over Wireless Networks | — | 0
FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large Language Models in Federated Learning | — | 0
FedMCP: Parameter-Efficient Federated Learning with Model-Contrastive Personalization | — | 0
FedP^2EFT: Federated Learning to Personalize Parameter Efficient Fine-Tuning for Multilingual LLMs | — | 0
FedPEAT: Convergence of Federated Learning, Parameter-Efficient Fine Tuning, and Emulator Assisted Tuning for Artificial Intelligence Foundation Models with Mobile Edge Computing | — | 0
FedPIA -- Permuting and Integrating Adapters leveraging Wasserstein Barycenters for Finetuning Foundation Models in Multi-Modal Federated Learning | — | 0
FedSCA: Federated Tuning with Similarity-guided Collaborative Aggregation for Heterogeneous Medical Image Segmentation | — | 0
FedVLMBench: Benchmarking Federated Fine-Tuning of Vision-Language Models | — | 0
FeTT: Continual Class Incremental Learning via Feature Transformation Tuning | — | 0
PETA: Parameter-Efficient Trojan Attacks | — | 0
Few-Shot Adversarial Low-Rank Fine-Tuning of Vision-Language Models | — | 0
FineCLIPER: Multi-modal Fine-grained CLIP for Dynamic Facial Expression Recognition with AdaptERs | — | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | — | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | — | Unverified