SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by training only a small fraction of the model's parameters (or a small set of added parameters) while keeping the rest frozen. This approach is particularly useful when computational resources are limited or when the original model's performance on its initial task must be preserved.
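As a concrete illustration, the sketch below applies LoRA, one widely used PEFT method, via the Hugging Face `peft` and `transformers` libraries. The model name, target modules, and hyperparameters are illustrative choices, not a prescription from any of the papers listed on this page.

```python
# Minimal LoRA fine-tuning sketch using the Hugging Face peft library.
# Model name, target modules, and hyperparameters are illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a pre-trained model; its original weights will stay frozen.
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# LoRA injects small trainable low-rank matrices into the chosen
# projection layers; only these adapter weights receive gradients.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # attention layers to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)

# Typically well under 1% of the total parameters are trainable.
model.print_trainable_parameters()
```

The wrapped model can then be trained with any standard loop or trainer; because only the adapter weights are updated, the memory and storage costs are a small fraction of full fine-tuning.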

Papers

Showing 726–750 of 935 papers

Title | Status | Hype
Selective Fine-tuning on LLM-labeled Data May Reduce Reliance on Human Annotation: A Case Study Using Schedule-of-Event Table Detection | — | 0
Parameter-Efficient Fine-Tuning With Adapters | — | 0
CourseGPT-zh: an Educational Large Language Model Based on Knowledge Distillation Incorporating Prompt Optimization | — | 0
Refining Joint Text and Source Code Embeddings for Retrieval Task with Parameter-Efficient Fine-Tuning | Code | 0
ELiTe: Efficient Image-to-LiDAR Knowledge Transfer for Semantic Segmentation | — | 0
Enhancing News Summarization with ELearnFit through Efficient In-Context Learning and Efficient Fine-Tuning | — | 0
TartuNLP at EvaLatin 2024: Emotion Polarity Detection | — | 0
Investigating Automatic Scoring and Feedback using Large Language Models | — | 0
MoPEFT: A Mixture-of-PEFTs for the Segment Anything Model | — | 0
RST-LoRA: A Discourse-Aware Low-Rank Adaptation for Long Document Abstractive Summarization | — | 0
SPAFIT: Stratified Progressive Adaptation Fine-tuning for Pre-trained Large Language Models | — | 0
FeDeRA: Efficient Fine-tuning of Language Models in Federated Learning Leveraging Weight Decomposition | — | 0
Parameter-Efficient Tuning Large Language Models for Graph Representation Learning | — | 0
Efficiency in Focus: LayerNorm as a Catalyst for Fine-tuning Medical Visual Language Pre-trained Models | — | 0
Gated Low-rank Adaptation for personalized Code-Switching Automatic Speech Recognition on the low-spec devices | — | 0
External Prompt Features Enhanced Parameter-efficient Fine-tuning for Salient Object Detection | — | 0
ColA: Collaborative Adaptation with Gradient Learning | Code | 0
Parameter Efficient Fine Tuning: A Comprehensive Analysis Across Applications | — | 0
iTBLS: A Dataset of Interactive Conversations Over Tabular Information | — | 0
Mixed Text Recognition with Efficient Parameter Fine-Tuning and Transformer | — | 0
TartuNLP @ SIGTYP 2024 Shared Task: Adapting XLM-RoBERTa for Ancient and Historical Languages | — | 0
Skeleton: A New Framework for Accelerating Language Models via Task Neuron Localized Prompt Tuning | — | 0
Exact and Efficient Unlearning for Large Language Model-based Recommendation | — | 0
Shears: Unstructured Sparsity with Neural Low-rank Adapter Search | — | 0
LoRA Dropout as a Sparsity Regularizer for Overfitting Control | — | 0
Page 30 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | — | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | — | Unverified