SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a technique for adapting pre-trained models to new tasks by updating only a small fraction of the model's parameters (or a small set of added parameters) while keeping the rest frozen. This approach is particularly useful when computational resources are limited or when it is desirable to preserve the original model's performance on its initial task.
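The core idea can be illustrated with a low-rank adapter (LoRA-style), one common PEFT method: the pre-trained weights stay frozen and only a small pair of low-rank matrices is trained. The sketch below is a minimal PyTorch illustration under those assumptions; the class name, rank, and scaling values are illustrative and not taken from any paper listed here.

```python
# Minimal LoRA-style PEFT sketch (illustrative; rank/alpha are assumed values).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pre-trained linear layer with a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)          # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen path plus low-rank correction: W x + scale * (B A) x
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

# Example: adapt a single projection layer; only lora_a / lora_b receive gradients.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")
```

Because only the low-rank matrices are optimized, the number of trainable parameters is a small fraction of the full layer, which is what makes fine-tuning cheap in memory and compute.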

Papers

Showing 901–935 of 935 papers

Title | Status | Hype
Identifying and Adapting Transformer-Components Responsible for Gender Bias in an English Language Model | Code | 0
Streaming Detection of Queried Event Start | Code | 0
AR-RAG: Autoregressive Retrieval Augmentation for Image Generation | Code | 0
IAP: Improving Continual Learning of Vision-Language Models via Instance-Aware Prompting | Code | 0
Structured Unrestricted-Rank Matrices for Parameter Efficient Fine-tuning | Code | 0
How to Tune a Multilingual Encoder Model for Germanic Languages: A Study of PEFT, Full Fine-Tuning, and Language Adapters | Code | 0
ReasoningV: Efficient Verilog Code Generation with Adaptive Hybrid Reasoning Model | Code | 0
Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models | Code | 0
Refining Joint Text and Source Code Embeddings for Retrieval Task with Parameter-Efficient Fine-Tuning | Code | 0
Refining Salience-Aware Sparse Fine-Tuning Strategies for Language Models | Code | 0
Re-Imagining Multimodal Instruction Tuning: A Representation View | Code | 0
Hiding Images in Diffusion Models by Editing Learned Score Functions | Code | 0
Train More Parameters But Mind Their Placement: Insights into Language Adaptation with PEFT | Code | 0
StyleSpeech: Parameter-efficient Fine Tuning for Pre-trained Controllable Text-to-Speech | Code | 0
Reprogramming Distillation for Medical Foundation Models | Code | 0
CoLA: Collaborative Low-Rank Adaptation | Code | 0
LLaVA Steering: Visual Instruction Tuning with 500x Fewer Parameters through Modality Linear Representation-Steering | Code | 0
AROMA: Autonomous Rank-one Matrix Adaptation | Code | 0
LLMsAgainstHate @ NLU of Devanagari Script Languages 2025: Hate Speech Detection and Target Identification in Devanagari Languages via Parameter Efficient Fine-Tuning of LLMs | Code | 0
Harnessing the Power of Large Language Model for Uncertainty Aware Graph Processing | Code | 0
Rethinking Token Reduction with Parameter-Efficient Fine-Tuning in ViT for Pixel-Level Tasks | Code | 0
ColA: Collaborative Adaptation with Gradient Learning | Code | 0
Leveraging Coordinate Momentum in SignSGD and Muon: Memory-Optimized Zero-Order | Code | 0
GuiLoMo: Allocating Expert Number and Rank for LoRA-MoE via Bilevel Optimization with Guided Selection Vectors | Code | 0
Adapting Multilingual LLMs to Low-Resource Languages with Knowledge Graphs via Adapters | Code | 0
Gradient Weight-normalized Low-rank Projection for Efficient LLM Training | Code | 0
RIFF: Learning to Rephrase Inputs for Few-shot Fine-tuning of Language Models | Code | 0
Are Large Language Models State-of-the-art Quality Estimators for Machine Translation of User-generated Content? | Code | 0
Gradient Inversion Attacks on Parameter-Efficient Fine-Tuning | Code | 0
Robust and Efficient Fine-tuning of LLMs with Bayesian Reparameterization of Low-Rank Adaptation | Code | 0
VELoRA: A Low-Rank Adaptation Approach for Efficient RGB-Event based Recognition | Code | 0
VTD-CLIP: Video-to-Text Discretization via Prompting CLIP | Code | 0
Coeff-Tuning: A Graph Filter Subspace View for Tuning Attention-Based Large Models | Code | 0
RoCoFT: Efficient Finetuning of Large Language Models with Row-Column Updates | Code | 0
GNNavi: Navigating the Information Flow in Large Language Models by Graph Neural Network | Code | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | – | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | – | Unverified