SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by updating only a small fraction of the model's parameters (or a small set of newly added ones) while the rest stay frozen. This approach is particularly useful when computational resources are limited, and it helps preserve the original model's performance on its initial task.
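
One widely used PEFT method is low-rank adaptation (LoRA), variants of which appear throughout the paper list below. The following is a minimal, illustrative PyTorch sketch of the core idea, not code from any listed paper: freeze the pre-trained weight matrix and train only a small low-rank correction. The class name, rank, and scaling constant are arbitrary choices for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen pre-trained linear layer with a trainable
    low-rank update: y = base(x) + scale * x (B A)^T.
    Only A and B (rank * (in + out) parameters) are trained."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pre-trained weights frozen

        # A starts with small random values and B with zeros, so the adapter
        # is initially a no-op and the model behaves exactly as before tuning.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Example: a 4096x4096 projection has ~16.8M weights; a rank-8 adapter
# trains only 2 * 8 * 4096 = 65,536 of them (~0.4%).
layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")  # 65536
```

Because the low-rank update can be merged into the base weights after training, this kind of adapter adds no inference latency, which is one reason LoRA-style methods dominate the list below.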

Papers

Showing 151–175 of 935 papers

Title | Status | Hype
FedSCA: Federated Tuning with Similarity-guided Collaborative Aggregation for Heterogeneous Medical Image Segmentation | - | 0
Empowering Smaller Models: Tuning LLaMA and Gemma with Chain-of-Thought for Ukrainian Exam Tasks | Code | 1
MAST-Pro: Dynamic Mixture-of-Experts for Adaptive Segmentation of Pan-Tumors with Knowledge-Driven Prompts | - | 0
Quantum-Enhanced LLM Efficient Fine Tuning | - | 0
A Survey on Federated Fine-tuning of Large Language Models | Code | 2
Watch and Learn: Leveraging Expert Knowledge and Language for Surgical Video Understanding | - | 0
MoLEx: Mixture of Layer Experts for Finetuning with Sparse Upcycling | Code | 0
Rethinking Few-Shot Adaptation of Vision-Language Models in Two Stages | Code | 1
Enhancing Aviation Communication Transcription: Fine-Tuning Distil-Whisper with LoRA | - | 0
Singular Value Fine-tuning for Few-Shot Class-Incremental Learning | - | 0
Efficient Federated Fine-Tuning of Large Language Models with Layer Dropout | - | 0
Privacy-Preserved Automated Scoring using Federated Learning for Educational Research | Code | 0
Enhanced Continual Learning of Vision-Language Models with Model Fusion | - | 0
Revisiting semi-supervised learning in the era of foundation models | Code | 1
Project-Probe-Aggregate: Efficient Fine-Tuning for Group Robustness | - | 0
MoFE: Mixture of Frozen Experts Architecture | - | 0
Lifelong Learning with Task-Specific Adaptation: Addressing the Stability-Plasticity Dilemma | - | 0
Personalized Text Generation with Contrastive Activation Steering | - | 0
LoRACode: LoRA Adapters for Code Embeddings | - | 0
Personalized Federated Fine-tuning for Heterogeneous Data: An Automatic Rank Learning Approach via Two-Level LoRA | - | 0
State-offset Tuning: State-based Parameter-Efficient Fine-Tuning for State Space Models | Code | 1
Addressing Overprescribing Challenges: Fine-Tuning Large Language Models for Medication Recommendation Tasks | Code | 0
PROPER: A Progressive Learning Framework for Personalized Large Language Models with Group-Level Adaptation | - | 0
Re-Imagining Multimodal Instruction Tuning: A Representation View | Code | 0
LORENZA: Enhancing Generalization in Low-Rank Gradient LLM Training via Efficient Zeroth-Order Adaptive SAM | - | 0
Page 7 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | - | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | - | Unverified