SOTAVerified

parameter-efficient fine-tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks while updating only a small fraction of the model's parameters. Most pre-trained weights stay frozen, which keeps compute and memory costs low and helps preserve the original model's behavior on its pre-training tasks.
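Many of the papers listed below build on low-rank adaptation (LoRA), one widely used PEFT method. The following is a minimal PyTorch sketch of the core idea, not the implementation from any listed paper: the pre-trained linear layer is frozen and only a small low-rank update is trained. The class name `LoRALinear` and the hyperparameter values are illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where only A and B are trained."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        # A: small random init; B: zero init, so the update is a no-op at start
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank correction
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap a layer and confirm only the low-rank factors are trainable.
layer = nn.Linear(768, 768)
lora_layer = LoRALinear(layer, r=8)
trainable = sum(p.numel() for p in lora_layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in lora_layer.parameters())
print(f"trainable: {trainable} / {total}")
```

With rank r = 8 on a 768x768 layer, the trainable factors hold 2 * 8 * 768 parameters versus roughly 590k frozen ones, which is the parameter saving PEFT methods aim for.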

Papers

Showing 476-500 of 935 papers

Title | Status | Hype
FINE: Factorizing Knowledge for Initialization of Variable-sized Diffusion Models | | 0
Fine-tuning vision foundation model for crack segmentation in civil infrastructures | | 0
Fine Tuning without Catastrophic Forgetting via Selective Low Rank Adaptation | | 0
FinLlama: Financial Sentiment Classification for Algorithmic Trading Applications | | 0
FinSQL: Model-Agnostic LLMs-based Text-to-SQL Framework for Financial Analysis | | 0
FISH-Tuning: Enhancing PEFT Methods with Fisher Information | | 0
Flat-LoRA: Low-Rank Adaption over a Flat Loss Landscape | | 0
FLoRIST: Singular Value Thresholding for Efficient and Accurate Federated Fine-Tuning of Large Language Models | | 0
From Text to Emoji: How PEFT-Driven Personality Manipulation Unleashes the Emoji Potential in LLMs | | 0
From Words to Worth: Newborn Article Impact Prediction with LLM | | 0
Functional-level Uncertainty Quantification for Calibrated Fine-tuning on LLMs | | 0
G-Adapter: Towards Structure-Aware Parameter-Efficient Transfer Learning for Graph Transformer Networks | | 0
Gated Low-rank Adaptation for personalized Code-Switching Automatic Speech Recognition on the low-spec devices | | 0
Systematic Analysis for Pretrained Language Model Priming for Parameter-Efficient Fine-tuning | | 0
Generalizability of Mixture of Domain-Specific Adapters from the Lens of Signed Weight Directions and its Application to Effective Model Pruning | | 0
Generalized Tensor-based Parameter-Efficient Fine-Tuning via Lie Group Transformations | | 0
Generative Modeling of Individual Behavior at Scale | | 0
GeoLoRA: Geometric integration for parameter efficient fine-tuning | | 0
Get Large Language Models Ready to Speak: A Late-fusion Approach for Speech Generation | | 0
GP-MoLFormer: A Foundation Model For Molecular Generation | | 0
GPT vs RETRO: Exploring the Intersection of Retrieval and Parameter-Efficient Fine-Tuning | | 0
Graph Adapter of EEG Foundation Models for Parameter Efficient Fine Tuning | | 0
GraphLoRA: Empowering LLMs Fine-Tuning via Graph Collaboration of MoE | | 0
GSQ-Tuning: Group-Shared Exponents Integer in Fully Quantized Training for LLMs On-Device Fine-tuning | | 0
Hallucinations and Truth: A Comprehensive Accuracy Evaluation of RAG, LoRA and DoRA | | 0
Page 20 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified