SOTAVerified

parameter-efficient fine-tuning

Parameter-Efficient Fine-Tuning (PEFT) refers to a family of techniques that adapt pre-trained models to new tasks by training only a small fraction of the model's parameters, or small added modules such as adapters, prompts, or low-rank (LoRA) updates, while the original weights stay frozen. This approach is particularly useful when computational resources are limited, when many task-specific variants of a single base model must be stored, or when it is desirable to preserve the original model's behavior on its initial tasks.
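
For concreteness, the sketch below illustrates the low-rank adaptation (LoRA) pattern that many of the papers listed on this page build on: the pre-trained weights are frozen and only a small low-rank update is trained. This is a minimal PyTorch illustration under assumed defaults (rank 8, scaling alpha/r), not any specific paper's implementation; the `LoRALinear` class and its names are made up for the example, and libraries such as Hugging Face `peft` provide production implementations of the same idea.

```python
# Minimal LoRA-style sketch (illustrative only, not a specific paper's method).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update (LoRA-style)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pre-trained weights stay frozen
        # Low-rank factors: A is initialised small, B at zero, so the adapted
        # layer starts out identical to the pre-trained one.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r             # common LoRA scaling convention

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank update applied to x.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Only the low-rank factors receive gradients; the base weights do not.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable parameters: {trainable:,} of {total:,}")
```

With a 768x768 linear layer, only about 12k of roughly 600k parameters are trainable, which is why PEFT methods can be fine-tuned and stored cheaply per task.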

Papers

Showing 651–700 of 935 papers

Title | Status | Hype
SwitchLoRA: Switched Low-Rank Adaptation Can Learn Full-Rank Information | – | 0
Risks When Sharing LoRA Fine-Tuned Diffusion Model Weights | – | 0
Robust Federated Finetuning of Foundation Models via Alternating Minimization of LoRA | – | 0
Robust Federated Finetuning of LLMs via Alternating Optimization of LoRA | – | 0
RocketPPA: Code-Level Power, Performance, and Area Prediction via LLM and Mixture of Experts | – | 0
RST-LoRA: A Discourse-Aware Low-Rank Adaptation for Long Document Abstractive Summarization | – | 0
SPD-CFL: Stepwise Parameter Dropout for Efficient Continual Federated Learning | – | 0
SaLoRA: Safety-Alignment Preserved Low-Rank Adaptation | – | 0
SAM-E: Leveraging Visual Foundation Model with Sequence Imitation for Embodied Manipulation | – | 0
SAM-PARSER: Fine-tuning SAM Efficiently by Parameter Space Reconstruction | – | 0
Scaled Prompt-Tuning for Few-Shot Natural Language Generation | – | 0
Scaling Laws for Forgetting When Fine-Tuning Large Language Models | – | 0
Scaling Up Summarization: Leveraging Large Language Models for Long Text Extractive Summarization | – | 0
SC-LoRA: Balancing Efficient Fine-tuning and Knowledge Preservation via Subspace-Constrained LoRA | – | 0
Sculpting [CLS] Features for Pre-Trained Model-Based Class-Incremental Learning | – | 0
SECURA: Sigmoid-Enhanced CUR Decomposition with Uninterrupted Retention and Low-Rank Adaptation in Large Language Models | – | 0
Selective Fine-tuning on LLM-labeled Data May Reduce Reliance on Human Annotation: A Case Study Using Schedule-of-Event Table Detection | – | 0
Self-Corrected Multimodal Large Language Model for End-to-End Robot Manipulation | – | 0
Semantic are Beacons: A Semantic Perspective for Unveiling Parameter-Efficient Fine-Tuning in Knowledge Learning | – | 0
Sensitivity-Aware Efficient Fine-Tuning via Compact Dynamic-Rank Adaptation | – | 0
Sequential Compression Layers for Efficient Federated Learning in Foundational Models | – | 0
Sequential LLM Framework for Fashion Recommendation | – | 0
Sharp Generalization Bounds for Foundation Models with Asymmetric Randomized Low-Rank Adapters | – | 0
Shears: Unstructured Sparsity with Neural Low-rank Adapter Search | – | 0
SIBO: A Simple Booster for Parameter-Efficient Fine-Tuning | – | 0
Singular Value Fine-tuning for Few-Shot Class-Incremental Learning | – | 0
Skeleton: A New Framework for Accelerating Language Models via Task Neuron Localized Prompt Tuning | – | 0
SLIM: Let LLM Learn More and Forget Less with Soft LoRA and Identity Mixture | – | 0
SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models | – | 0
SOLIDO: A Robust Watermarking Method for Speech Synthesis via Low-Rank Adaptation | – | 0
SoMA: Singular Value Decomposed Minor Components Adaptation for Domain Generalizable Representation Learning | – | 0
SPAFIT: Stratified Progressive Adaptation Fine-tuning for Pre-trained Large Language Models | – | 0
Sparsely Shared LoRA on Whisper for Child Speech Recognition | – | 0
Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning | – | 0
Sparsity- and Hybridity-Inspired Visual Parameter-Efficient Fine-Tuning for Medical Diagnosis | – | 0
Speech Recognition for Automatically Assessing Afrikaans and isiXhosa Preschool Oral Narratives | – | 0
SplitLoRA: A Split Parameter-Efficient Fine-Tuning Framework for Large Language Models | – | 0
SPT: Semi-Parametric Prompt Tuning for Multitask Prompted Learning | – | 0
SQFT: Low-cost Model Adaptation in Low-precision Sparse Foundation Models | – | 0
SRLoRA: Subspace Recomposition in Low-Rank Adaptation via Importance-Based Fusion and Reinitialization | – | 0
Strong Baselines for Parameter Efficient Few-Shot Fine-tuning | – | 0
Style Attuned Pre-training and Parameter Efficient Fine-tuning for Spoken Language Understanding | – | 0
SuryaKiran at MEDIQA-Sum 2023: Leveraging LoRA for Clinical Dialogue Summarization | – | 0
SVFit: Parameter-Efficient Fine-Tuning of Large Pre-Trained Models Using Singular Values | – | 0
TADFormer : Task-Adaptive Dynamic Transformer for Efficient Multi-Task Learning | – | 0
TADFormer: Task-Adaptive Dynamic TransFormer for Efficient Multi-Task Learning | – | 0
TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models | – | 0
TartuNLP at EvaLatin 2024: Emotion Polarity Detection | – | 0
TartuNLP @ SIGTYP 2024 Shared Task: Adapting XLM-RoBERTa for Ancient and Historical Languages | – | 0
tCURLoRA: Tensor CUR Decomposition Based Low-Rank Parameter Adaptation and Its Application in Medical Image Segmentation | – | 0
Page 14 of 19

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | – | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | – | Unverified