SOTAVerified

parameter-efficient fine-tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by updating only a small fraction of the model's parameters (or small added modules such as adapters or low-rank matrices) while the rest stay frozen. This approach is particularly useful when computational resources are limited or when the original model's behavior on its initial task should be preserved.
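
A minimal sketch of one widely used PEFT technique, low-rank adaptation (LoRA), is given below in PyTorch. It is illustrative only, not the implementation of any paper listed on this page; the class name LoRALinear and the rank and alpha values are assumptions chosen for the example.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the pre-trained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank update delta_W = B @ A with A: (rank, in), B: (out, rank),
        # so trainable parameters drop from out*in to rank*(out + in).
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no initial drift
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled trainable low-rank correction
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

# Only A and B receive gradients during fine-tuning.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # roughly 2% of the layer's parameters

Because B is initialized to zero, the adapted layer starts out identical to the frozen base model, which is one reason LoRA-style methods preserve the original model's behavior at the start of fine-tuning.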

Papers

Showing 601–650 of 935 papers

Title | Status | Hype
DP-DyLoRA: Fine-Tuning Transformer-Based Models On-Device under Differentially Private Federated Learning using Dynamic Low-Rank Adaptation | - | 0
Dual Decomposition of Weights and Singular Value Low Rank Adaptation | - | 0
Dual Low-Rank Adaptation for Continual Learning with Pre-Trained Models | - | 0
EEG-Based Mental Imagery Task Adaptation via Ensemble of Weight-Decomposed Low-Rank Adapters | - | 0
Efficiency at Scale: Investigating the Performance of Diminutive Language Models in Clinical Tasks | - | 0
Efficiency in Focus: LayerNorm as a Catalyst for Fine-tuning Medical Visual Language Pre-trained Models | - | 0
Efficient Adaptation For Remote Sensing Visual Grounding | - | 0
Efficient Adaptation of Pre-trained Vision Transformer via Householder Transformation | - | 0
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy | - | 0
Efficient and Effective Adaptation of Multimodal Foundation Models in Sequential Recommendation | - | 0
Efficient and Private: Memorisation under differentially private parameter-efficient fine-tuning in language models | - | 0
Efficient Continual Adaptation of Pretrained Robotic Policy with Online Meta-Learned Adapters | - | 0
Efficient Deployment of Large Language Models on Resource-constrained Devices | - | 0
Efficient Differentially Private Fine-Tuning of Diffusion Models | - | 0
Efficient Federated Class-Incremental Learning of Pre-Trained Models via Task-agnostic Low-rank Residual Adaptation | - | 0
Efficient Federated Fine-Tuning of Large Language Models with Layer Dropout | - | 0
Efficient Federated Split Learning for Large Language Models over Communication Networks | - | 0
Efficient In-Domain Question Answering for Resource-Constrained Environments | - | 0
Efficient Telecom Specific LLM: TSLAM-Mini with QLoRA and Digital Twin Data | - | 0
EF-LLM: Energy Forecasting LLM with AI-assisted Automation, Enhanced Sparse Prediction, Hallucination Detection | - | 0
ELBA-Bench: An Efficient Learning Backdoor Attacks Benchmark for Large Language Models | - | 0
ELiTe: Efficient Image-to-LiDAR Knowledge Transfer for Semantic Segmentation | - | 0
Embedding-based statistical inference on generative models | - | 0
Empirical Analysis of the Strengths and Weaknesses of PEFT Techniques for LLMs | - | 0
Enabling Efficient On-Device Fine-Tuning of LLMs Using Only Inference Engines | - | 0
The Odychess Approach: A Dialectical, Constructivist, and Adaptive Method for Teaching Chess with Generative Artificial Intelligences | - | 0
Enhanced Continual Learning of Vision-Language Models with Model Fusion | - | 0
Enhancing Aviation Communication Transcription: Fine-Tuning Distil-Whisper with LoRA | - | 0
Enhancing knowledge retention for continual learning with domain-specific adapters and features gating | - | 0
Enhancing Large Language Model Efficiency via Symbolic Compression: A Formal Approach Towards Interpretability | - | 0
Enhancing Low-Resource LLMs Classification with PEFT and Synthetic Data | - | 0
Enhancing Multilingual Speech Recognition through Language Prompt Tuning and Frame-Level Language Adapter | - | 0
Enhancing Multi-modal Models with Heterogeneous MoE Adapters for Fine-tuning | - | 0
Enhancing News Summarization with ELearnFit through Efficient In-Context Learning and Efficient Fine-Tuning | - | 0
Enhancing the efficiency of protein language models with minimal wet-lab data through few-shot learning | - | 0
Exact and Efficient Unlearning for Large Language Model-based Recommendation | - | 0
Explainable ICD Coding via Entity Linking | - | 0
Explain Less, Understand More: Jargon Detection via Personalized Parameter-Efficient Fine-tuning | - | 0
ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers under Domain Shifts | - | 0
Exploring Adapter Design Tradeoffs for Low Resource Music Generation | - | 0
Exploring Parameter-Efficient Fine-Tuning to Enable Foundation Models in Federated Learning | - | 0
Exploring Zero and Few-shot Techniques for Intent Classification | - | 0
External Prompt Features Enhanced Parameter-efficient Fine-tuning for Salient Object Detection | - | 0
F^3OCUS -- Federated Finetuning of Vision-Language Foundation Models with Optimal Client Layer Updating Strategy via Multi-objective Meta-Heuristics | - | 0
FairLoRA: Unpacking Bias Mitigation in Vision Models with Fairness-Driven Low-Rank Adaptation | - | 0
FastEdit: Fast Text-Guided Single-Image Editing via Semantic-Aware Diffusion Fine-Tuning | - | 0
Fast-NTK: Parameter-Efficient Unlearning for Large-Scale Models | - | 0
FeDeRA: Efficient Fine-tuning of Language Models in Federated Learning Leveraging Weight Decomposition | - | 0
Federated Adapter on Foundation Models: An Out-Of-Distribution Approach | - | 0
Page 13 of 19

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | - | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | - | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | - | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | - | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | - | Unverified