
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of their parameters, or by adding a small set of new ones, while keeping the original weights frozen. This approach is particularly useful when computational resources are limited, when many task-specific variants of one base model must be stored, or when the model's performance on its original task should be preserved.
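As a concrete illustration, consider LoRA (low-rank adaptation), one of the PEFT methods that recurs throughout the paper list below. The sketch is a minimal, hypothetical PyTorch implementation: the `LoRALinear` class, its rank `r`, and the scaling factor `alpha` are illustrative assumptions for this page, not the API of any library listed here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Hypothetical LoRA wrapper: freezes a pre-trained linear layer
    and learns a low-rank additive update W + (alpha/r) * B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        # Low-rank factors: A is (r x in), B is (out x r). B starts at
        # zero, so the adapted layer is initially identical to the base.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

# Only the low-rank factors receive gradients:
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # about 2% of this layer
```

Because `B` is initialized to zero, training starts from the unchanged pre-trained model, and in this sketch only roughly 2% of the layer's parameters are updated.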

Papers

Showing 826–850 of 935 papers

| Title | Status | Hype |
|-------|--------|------|
| Hydra: Multi-head Low-rank Adaptation for Parameter Efficient Fine-tuning | Code | 1 |
| Scaled Prompt-Tuning for Few-Shot Natural Language Generation | | 0 |
| Exploring the Benefits of Differentially Private Pre-training and Parameter-Efficient Fine-tuning for Table Transformers | Code | 0 |
| DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning | Code | 1 |
| Pushing Mixture of Experts to the Limit: Extremely Parameter Efficient MoE for Instruction Tuning | Code | 2 |
| FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large Language Models in Federated Learning | | 0 |
| Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following | Code | 2 |
| SAM-PARSER: Fine-tuning SAM Efficiently by Parameter Space Reconstruction | | 0 |
| SoTaNa: The Open-Source Software Development Assistant | Code | 1 |
| IncreLoRA: Incremental Parameter Allocation Method for Parameter-Efficient Fine-tuning | Code | 1 |
| LLaMA-Reviewer: Advancing Code Review Automation with Large Language Models through Parameter-Efficient Fine-Tuning | | 0 |
| Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models | Code | 1 |
| Comparison between parameter-efficient techniques and full fine-tuning: A case study on multilingual news article classification | Code | 0 |
| SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models | | 0 |
| Towards Instance-adaptive Inference for Federated Learning | Code | 1 |
| WIKITIDE: A Wikipedia-Based Timestamped Definition Pairs Dataset | | 0 |
| SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning | Code | 1 |
| MA-FSAR: Multimodal Adaptation of CLIP for Few-Shot Action Recognition | | 0 |
| Towards Trustworthy and Aligned Machine Learning: A Data-centric Survey with Causality Perspectives | | 0 |
| DVPT: Dynamic Visual Prompt Tuning of Large Pre-trained Models for Medical Image Analysis | Code | 0 |
| SuryaKiran at MEDIQA-Sum 2023: Leveraging LoRA for Clinical Dialogue Summarization | | 0 |
| Parameter-Efficient Fine-Tuning of LLaMA for the Clinical Domain | Code | 1 |
| RS5M and GeoRSCLIP: A Large Scale Vision-Language Dataset and A Large Vision-Language Model for Remote Sensing | Code | 2 |
| Full Parameter Fine-tuning for Large Language Models with Limited Resources | Code | 2 |
| One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning | Code | 2 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified |
| 4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified |