SOTAVerified

parameter-efficient fine-tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by updating only a small fraction of the model's parameters, or a small set of newly added parameters, while the rest of the network stays frozen. This approach is particularly useful when computational resources are limited or when it is desirable to preserve the original model's behavior on its initial task.
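
One of the most common PEFT methods is low-rank adaptation (LoRA), which recurs throughout the papers listed below. The sketch that follows shows the general pattern using the Hugging Face `peft` library; the base model name and hyperparameter values are illustrative assumptions, not settings taken from any paper on this page.

```python
# Minimal LoRA sketch with the Hugging Face `peft` library.
# The model name and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
)

model = get_peft_model(base, config)  # wraps the frozen base model with adapters
model.print_trainable_parameters()    # typically well under 1% of weights train
```

Only the injected low-rank matrices are trained; the original weights stay frozen, which is what keeps the memory and storage costs of adaptation low.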

Papers

Showing 601–650 of 935 papers

| Title | Status | Hype |
|---|---|---|
| LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report | Code | 1 |
| FeDeRA: Efficient Fine-tuning of Language Models in Federated Learning Leveraging Weight Decomposition | – | 0 |
| Efficient Remote Sensing with Harmonized Transfer Learning and Modality Alignment | Code | 2 |
| Parameter-Efficient Tuning Large Language Models for Graph Representation Learning | – | 0 |
| Parameter Efficient Fine-tuning of Self-supervised ViTs without Catastrophic Forgetting | Code | 1 |
| Efficiency in Focus: LayerNorm as a Catalyst for Fine-tuning Medical Visual Language Pre-trained Models | – | 0 |
| Gated Low-rank Adaptation for personalized Code-Switching Automatic Speech Recognition on the low-spec devices | – | 0 |
| Simple, Efficient and Scalable Structure-aware Adapter Boosts Protein Language Models | Code | 1 |
| External Prompt Features Enhanced Parameter-efficient Fine-tuning for Salient Object Detection | – | 0 |
| ColA: Collaborative Adaptation with Gradient Learning | Code | 0 |
| Parameter Efficient Fine Tuning: A Comprehensive Analysis Across Applications | – | 0 |
| iTBLS: A Dataset of Interactive Conversations Over Tabular Information | – | 0 |
| TartuNLP @ SIGTYP 2024 Shared Task: Adapting XLM-RoBERTa for Ancient and Historical Languages | – | 0 |
| Mixed Text Recognition with Efficient Parameter Fine-Tuning and Transformer | – | 0 |
| Skeleton: A New Framework for Accelerating Language Models via Task Neuron Localized Prompt Tuning | – | 0 |
| Shears: Unstructured Sparsity with Neural Low-rank Adapter Search | – | 0 |
| Exact and Efficient Unlearning for Large Language Model-based Recommendation | – | 0 |
| LoRA Dropout as a Sparsity Regularizer for Overfitting Control | – | 0 |
| Navigating the Landscape of Large Language Models: A Comprehensive Review and Analysis of Paradigms and Fine-Tuning Strategies | Code | 0 |
| FLoRA: Enhancing Vision-Language Models with Parameter-Efficient Federated Learning | Code | 0 |
| Any2Point: Empowering Any-modality Large Models for Efficient 3D Understanding | Code | 2 |
| PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models | Code | 0 |
| Automated Federated Pipeline for Parameter-Efficient Fine-Tuning of Large Language Models | – | 0 |
| Using Few-Shot Learning to Classify Primary Lung Cancer and Other Malignancy with Lung Metastasis in Cytological Imaging via Endobronchial Ultrasound Procedures | – | 0 |
| Certified PEFTSmoothing: Parameter-Efficient Fine-Tuning with Randomized Smoothing | – | 0 |
| Towards More General Video-based Deepfake Detection through Facial Feature Guided Adaptation for Foundation Model | Code | 1 |
| DLoRA: Distributed Parameter-Efficient Fine-Tuning Solution for Large Language Model | – | 0 |
| Mixture of Low-rank Experts for Transferable AI-Generated Image Detection | Code | 1 |
| Q-PEFT: Query-dependent Parameter Efficient Fine-tuning for Text Reranking with Large Language Models | – | 0 |
| Unlocking Parameter-Efficient Fine-Tuning for Low-Resource Language Translation | – | 0 |
| GP-MoLFormer: A Foundation Model For Molecular Generation | – | 0 |
| Personalized LLM Response Generation with Parameterized Memory Injection | Code | 0 |
| Enhancing Low-Resource LLMs Classification with PEFT and Synthetic Data | – | 0 |
| IISAN: Efficiently Adapting Multimodal Representation for Sequential Recommendation with Decoupled PEFT | Code | 1 |
| Position-Aware Parameter Efficient Fine-Tuning Approach for Reducing Positional Bias in LLMs | – | 0 |
| Harnessing the Power of Large Language Model for Uncertainty Aware Graph Processing | Code | 0 |
| Query-driven Relevant Paragraph Extraction from Legal Judgments | – | 0 |
| Edinburgh Clinical NLP at SemEval-2024 Task 2: Fine-tune your model unless you have access to GPT-4 | Code | 0 |
| InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning | Code | 2 |
| MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning | Code | 2 |
| LayerNorm: A key component in parameter-efficient fine-tuning | – | 0 |
| Low-Rank Rescaled Vision Transformer Fine-Tuning: A Residual Design Approach | Code | 1 |
| Is Modularity Transferable? A Case Study through the Lens of Knowledge Distillation | Code | 0 |
| ILLUMINER: Instruction-tuned Large Language Models as Few-shot Intent Classifier and Slot Filler | Code | 1 |
| LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning | Code | 9 |
| ALoRA: Allocating Low-Rank Adaptation for Fine-tuning Large Language Models | – | 0 |
| A Single Linear Layer Yields Task-Adapted Low-Rank Matrices | – | 0 |
| Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey | – | 0 |
| AdaViPro: Region-based Adaptive Visual Prompt for Large-Scale Models Adapting | – | 0 |
| Harnessing Large Language Models for Text-Rich Sequential Recommendation | Code | 1 |
Page 13 of 19

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 81.93 | – | Unverified |
| 4 | LLaMA2-7b | Accuracy (%) | 80.28 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 76.68 | – | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 76.67 | – | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 76.27 | – | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 70.8 | – | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 70.09 | – | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 69.85 | – | Unverified |