
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by updating only a small fraction of the model's parameters, or a small set of newly added ones, while keeping the rest frozen. This approach is particularly useful when computational resources are limited or when the original model's performance on its initial task should be preserved.
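
To make the definition concrete, the sketch below shows one widely used PEFT method, low-rank adaptation (LoRA), in PyTorch. This is a minimal illustration under assumed defaults, not a reference implementation: the LoRALinear class name and the rank and alpha values are hypothetical choices for the example.

```python
# Minimal LoRA-style adapter sketch (PyTorch). Hypothetical example code:
# the class name and hyperparameter defaults below are assumptions, not
# the API of any specific library.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pre-trained nn.Linear with a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze all pre-trained parameters of the wrapped layer.
        for p in self.base.parameters():
            p.requires_grad_(False)
        # Trainable low-rank factors: A maps d_in -> r, B maps r -> d_out.
        self.lora_A = nn.Linear(base.in_features, rank, bias=False)
        self.lora_B = nn.Linear(rank, base.out_features, bias=False)
        # B starts at zero so the adapted layer initially equals the base layer.
        nn.init.zeros_(self.lora_B.weight)
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction: W x + (alpha/r) * B(A(x))
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))

# Usage: only the adapter's parameters receive gradients.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable parameters: {trainable} of {total}")
```

Because the base weights are frozen, only the two low-rank factors train, about 2% of the layer's parameters in this example, which is what makes the method parameter-efficient.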

Papers

Showing 751–800 of 935 papers

Title | Status | Hype
Navigating the Landscape of Large Language Models: A Comprehensive Review and Analysis of Paradigms and Fine-Tuning Strategies | Code | 0
FLoRA: Enhancing Vision-Language Models with Parameter-Efficient Federated Learning | Code | 0
PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models | Code | 0
Automated Federated Pipeline for Parameter-Efficient Fine-Tuning of Large Language Models | – | 0
Using Few-Shot Learning to Classify Primary Lung Cancer and Other Malignancy with Lung Metastasis in Cytological Imaging via Endobronchial Ultrasound Procedures | – | 0
DLoRA: Distributed Parameter-Efficient Fine-Tuning Solution for Large Language Model | – | 0
Certified PEFTSmoothing: Parameter-Efficient Fine-Tuning with Randomized Smoothing | – | 0
Q-PEFT: Query-dependent Parameter Efficient Fine-tuning for Text Reranking with Large Language Models | – | 0
Unlocking Parameter-Efficient Fine-Tuning for Low-Resource Language Translation | – | 0
Personalized LLM Response Generation with Parameterized Memory Injection | Code | 0
GP-MoLFormer: A Foundation Model For Molecular Generation | – | 0
Enhancing Low-Resource LLMs Classification with PEFT and Synthetic Data | – | 0
Position-Aware Parameter Efficient Fine-Tuning Approach for Reducing Positional Bias in LLMs | – | 0
Harnessing the Power of Large Language Model for Uncertainty Aware Graph Processing | Code | 0
Query-driven Relevant Paragraph Extraction from Legal Judgments | – | 0
Edinburgh Clinical NLP at SemEval-2024 Task 2: Fine-tune your model unless you have access to GPT-4 | Code | 0
LayerNorm: A key component in parameter-efficient fine-tuning | – | 0
Is Modularity Transferable? A Case Study through the Lens of Knowledge Distillation | Code | 0
ALoRA: Allocating Low-Rank Adaptation for Fine-tuning Large Language Models | – | 0
A Single Linear Layer Yields Task-Adapted Low-Rank Matrices | – | 0
Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey | – | 0
AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models | – | 0
AdaViPro: Region-based Adaptive Visual Prompt for Large-Scale Models Adapting | – | 0
AdaFish: Fast low-rank parameter-efficient fine-tuning by using second-order information | – | 0
Improving LoRA in Privacy-preserving Federated Learning | – | 0
FinLlama: Financial Sentiment Classification for Algorithmic Trading Applications | – | 0
Let's Focus on Neuron: Neuron-Level Supervised Fine-tuning for Large Language Model | – | 0
Empirical Studies of Parameter Efficient Methods for Large Language Models of Code and Knowledge Transfer to R | Code | 0
Introducing Routing Functions to Vision-Language Parameter-Efficient Fine-Tuning with Low-Rank Bottlenecks | Code | 0
An Empirical Study of Parameter Efficient Fine-tuning on Vision-Language Pre-train Model | – | 0
Targeted Efficient Fine-tuning: Optimizing Parameter Updates with Data-Driven Sample Selection | – | 0
Matrix-Transformation Based Low-Rank Adaptation (MTLoRA): A Brain-Inspired Method for Parameter-Efficient Fine-Tuning | – | 0
Block-wise LoRA: Revisiting Fine-grained LoRA for Effective Personalization and Stylization in Text-to-Image Generation | – | 0
RIFF: Learning to Rephrase Inputs for Few-shot Fine-tuning of Language Models | Code | 0
STAR: Constraint LoRA with Dynamic Active Learning for Data-Efficient Fine-Tuning of Large Language Models | Code | 0
ResLoRA: Identity Residual Mapping in Low-Rank Adaption | – | 0
Inducing Generalization across Languages and Tasks using Featurized Low-Rank Mixtures | – | 0
DiffuseKronA: A Parameter Efficient Fine-tuning Method for Personalized Diffusion Models | – | 0
A Fine-tuning Enhanced RAG System with Quantized Influence Measure as AI Judge | – | 0
MIP: CLIP-based Image Reconstruction from PEFT Gradients | – | 0
PeriodicLoRA: Breaking the Low-Rank Bottleneck in LoRA Optimization | – | 0
Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning | – | 0
Does Combining Parameter-efficient Modules Improve Few-shot Transfer Accuracy? | – | 0
PEMT: Multi-Task Correlation Guided Mixture-of-Experts Enables Parameter-Efficient Transfer Learning | – | 0
Two-stage Cytopathological Image Synthesis for Augmenting Cervical Abnormality Screening | – | 0
MoELoRA: Contrastive Learning Guided Mixture of Experts on Parameter-Efficient Fine-Tuning for Large Language Models | – | 0
NOTE: Notable generation Of patient Text summaries through Efficient approach based on direct preference optimization | – | 0
Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning | – | 0
Bayesian Parameter-Efficient Fine-Tuning for Overcoming Catastrophic Forgetting | Code | 0
LoRA Training in the NTK Regime has No Spurious Local Minima | Code | 0
Page 16 of 19

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | – | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | – | Unverified