
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts a pre-trained model to a new task by updating only a small fraction of its parameters, or a small set of added modules such as adapters, prompts, or low-rank matrices, while keeping the rest frozen. This is particularly useful when compute or memory is limited, and because the pre-trained weights are left untouched, it also helps preserve the model's original capabilities.
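To make the idea concrete, below is a minimal sketch of the low-rank adaptation (LoRA) flavor of PEFT, the family behind several papers in the list that follows (e.g., QLoRA, LoRAPrune). The `LoRALinear` class and the choices of rank `r` and scaling `alpha` are illustrative assumptions, not taken from any specific paper here: the base layer's weights are frozen, and only two small low-rank matrices are trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper: a frozen base layer plus a trainable low-rank update."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pre-trained weights stay frozen
        # Low-rank factors: A is small-random, B is zero, so the wrapped layer
        # initially computes exactly the same function as the base layer.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base(x) + scale * B(Ax); only A and B receive gradients.
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} / {total:,}")  # 12,288 / 602,880 (~2%)
```

After training, the low-rank update can be merged into the base weight matrix (W' = W + scale * B A), so inference incurs no extra latency; QLoRA, listed below, combines this recipe with a 4-bit quantized base model.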

Papers

Showing 851–875 of 935 papers

| Title | Status | Hype |
| --- | --- | --- |
| Prompt to be Consistent is Better than Self-Consistent? Few-Shot and Zero-Shot Fact Verification with Pre-trained Language Models | Code | 0 |
| OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models | Code | 1 |
| Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning | Code | 1 |
| PreQuant: A Task-agnostic Quantization Approach for Pre-trained Language Models |  | 0 |
| Explicit Visual Prompting for Universal Foreground Segmentations | Code | 2 |
| LoRAPrune: Structured Pruning Meets Low-Rank Parameter-Efficient Fine-Tuning | Code | 1 |
| Parameter-Efficient Fine-Tuning without Introducing New Latency | Code | 0 |
| Neural Architecture Search for Parameter-Efficient Fine-tuning of Large Pre-trained Language Models |  | 0 |
| Towards Adaptive Prefix Tuning for Parameter-Efficient Language Model Fine-tuning |  | 0 |
| QLoRA: Efficient Finetuning of Quantized LLMs | Code | 6 |
| Parameter-Efficient Language Model Tuning with Active Learning in Low-Resource Settings | Code | 0 |
| MasakhaPOS: Part-of-Speech Tagging for Typologically Diverse African Languages | Code | 1 |
| Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization |  | 0 |
| SPARSEFIT: Few-shot Prompting with Sparse Fine-tuning for Jointly Generating Predictions and Natural Language Explanations | Code | 0 |
| Ahead-of-Time P-Tuning |  | 0 |
| G-Adapter: Towards Structure-Aware Parameter-Efficient Transfer Learning for Graph Transformer Networks |  | 0 |
| Parameter-Efficient Fine-Tuning with Layer Pruning on Free-Text Sequence-to-Sequence Modeling | Code | 1 |
| Parameter-Efficient Fine-Tuning for Medical Image Analysis: The Missed Opportunity |  | 0 |
| A Comprehensive Analysis of Adapter Efficiency | Code | 1 |
| Exploring Zero and Few-shot Techniques for Intent Classification |  | 0 |
| LACoS-BLOOM: Low-rank Adaptation with Contrastive objective on 8 bits Siamese-BLOOM |  | 0 |
| Text-guided High-definition Consistency Texture Model |  | 0 |
| SUR-adapter: Enhancing Text-to-Image Pre-trained Diffusion Models with Large Language Models | Code | 1 |
| HiFi: High-Information Attention Heads Hold for Parameter-Efficient Model Adaptation |  | 0 |
| Parameter-Efficient Cross-lingual Transfer of Vision and Language Models via Translation-based Alignment | Code | 0 |
Page 35 of 38

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 82.63 |  | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 82.63 |  | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 81.93 |  | Unverified |
| 4 | LLaMA2-7b | Accuracy (%) | 80.28 |  | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 76.68 |  | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 76.67 |  | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 76.27 |  | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 70.8 |  | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 70.09 |  | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 69.85 |  | Unverified |