SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of the model's parameters, or small added modules, while keeping the rest frozen. This approach is particularly useful when computational resources are limited, or when the original model's performance on its initial task should be preserved.
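
As a minimal sketch of the idea, the snippet below wraps a frozen linear layer with a LoRA-style trainable low-rank update, assuming PyTorch. The class name LoRALinear and the rank/alpha defaults are illustrative only, not taken from any paper listed on this page.

```python
# Minimal LoRA-style PEFT sketch (illustrative, not any specific paper's method).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        # Low-rank factors: A is small random, B starts at zero, so the
        # wrapped layer initially reproduces the pre-trained behavior.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # Frozen path plus scaled low-rank update: W x + (B A) x * scaling
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.1f}%)")
```

With these (assumed) settings, only about 2% of the layer's parameters are trainable; initializing B to zero means fine-tuning starts exactly from the pre-trained model's outputs.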

Papers

Showing 351–400 of 935 papers

Title | Status | Hype
Harnessing Generative LLMs for Enhanced Financial Event Entity Extraction Performance | – | 0
ReasoningV: Efficient Verilog Code Generation with Adaptive Hybrid Reasoning Model | Code | 0
PEFT A2Z: Parameter-Efficient Fine-Tuning Survey for Large Language and Vision Models | – | 0
6G WavesFM: A Foundation Model for Sensing, Communication, and Localization | – | 0
HSACNet: Hierarchical Scale-Aware Consistency Regularized Semi-Supervised Change Detection | – | 0
Parameter-Efficient Continual Fine-Tuning: A Survey | – | 0
Integrating Structural and Semantic Signals in Text-Attributed Graphs with BiGTex | Code | 0
You Don't Need All Attentions: Distributed Dynamic Fine-Tuning for Foundation Models | – | 0
A Decade of Wheat Mapping for Lebanon | – | 0
Balancing Stability and Plasticity in Pretrained Detector: A Dual-Path Framework for Incremental Object Detection | – | 0
CROSSAN: Towards Efficient and Effective Adaptation of Multiple Multimodal Foundation Models for Sequential Recommendation | Code | 0
Enhancing knowledge retention for continual learning with domain-specific adapters and features gating | – | 0
Teaching pathology foundation models to accurately predict gene expression with parameter efficient knowledge transfer | – | 0
AROMA: Autonomous Rank-one Matrix Adaptation | Code | 0
FISH-Tuning: Enhancing PEFT Methods with Fisher Information | – | 0
Bridging the Linguistic Divide: A Survey on Leveraging Large Language Models for Machine Translation | – | 0
CLIP-SLA: Parameter-Efficient CLIP Adaptation for Continuous Sign Language Recognition | Code | 0
DynMoLE: Boosting Mixture of LoRA Experts Fine-Tuning with a Hybrid Routing Mechanism | Code | 0
Generalized Tensor-based Parameter-Efficient Fine-Tuning via Lie Group Transformations | – | 0
MetaLoRA: Tensor-Enhanced Adaptive Low-Rank Fine-tuning | – | 0
Mixture of Routers | – | 0
Efficient Adaptation For Remote Sensing Visual Grounding | – | 0
AutoPsyC: Automatic Recognition of Psychodynamic Conflicts from Semi-structured Interviews with Large Language Models | – | 0
RocketPPA: Code-Level Power, Performance, and Area Prediction via LLM and Mixture of Experts | – | 0
MSPLoRA: A Multi-Scale Pyramid Low-Rank Adaptation for Efficient Model Fine-Tuning | Code | 0
Enhancing Multi-modal Models with Heterogeneous MoE Adapters for Fine-tuning | – | 0
IAP: Improving Continual Learning of Vision-Language Models via Instance-Aware Prompting | Code | 0
Explainable ICD Coding via Entity Linking | – | 0
QUAD: Quantization and Parameter-Efficient Tuning of LLM with Activation Decomposition | Code | 0
Efficient Continual Adaptation of Pretrained Robotic Policy with Online Meta-Learned Adapters | – | 0
Hiding Images in Diffusion Models by Editing Learned Score Functions | Code | 0
VTD-CLIP: Video-to-Text Discretization via Prompting CLIP | Code | 0
Coeff-Tuning: A Graph Filter Subspace View for Tuning Attention-Based Large Models | Code | 0
Visual Variational Autoencoder Prompt Tuning | – | 0
TRACE: Time SeRies PArameter EffiCient FinE-tuning | – | 0
PE-CLIP: A Parameter-Efficient Fine-Tuning of Vision Language Models for Dynamic Facial Expression Recognition | Code | 0
VP-NTK: Exploring the Benefits of Visual Prompting in Differentially Private Data Synthesis | – | 0
FedSCA: Federated Tuning with Similarity-guided Collaborative Aggregation for Heterogeneous Medical Image Segmentation | – | 0
MAST-Pro: Dynamic Mixture-of-Experts for Adaptive Segmentation of Pan-Tumors with Knowledge-Driven Prompts | – | 0
Quantum-Enhanced LLM Efficient Fine Tuning | – | 0
Watch and Learn: Leveraging Expert Knowledge and Language for Surgical Video Understanding | – | 0
MoLEx: Mixture of Layer Experts for Finetuning with Sparse Upcycling | Code | 0
Enhancing Aviation Communication Transcription: Fine-Tuning Distil-Whisper with LoRA | – | 0
Efficient Federated Fine-Tuning of Large Language Models with Layer Dropout | – | 0
Singular Value Fine-tuning for Few-Shot Class-Incremental Learning | – | 0
Enhanced Continual Learning of Vision-Language Models with Model Fusion | – | 0
Project-Probe-Aggregate: Efficient Fine-Tuning for Group Robustness | – | 0
Privacy-Preserved Automated Scoring using Federated Learning for Educational Research | Code | 0
MoFE: Mixture of Frozen Experts Architecture | – | 0
Lifelong Learning with Task-Specific Adaptation: Addressing the Stability-Plasticity Dilemma | – | 0
Page 8 of 19

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | – | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | – | Unverified