SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) refers to a family of techniques for adapting pre-trained models to new tasks by updating only a small fraction of their parameters, or by adding small trainable modules, while keeping the bulk of the pre-trained weights frozen. This approach is particularly useful when computational resources are limited, or when it is desirable to preserve the original model's behavior on its initial task.
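Many of the papers below build on low-rank adaptation (LoRA), one of the most widely used PEFT methods. As a concrete illustration of the idea, here is a minimal PyTorch sketch; the rank, scaling, and initialization choices are illustrative assumptions, not the settings used by any paper listed here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pre-trained linear layer plus a trainable low-rank update.

    The effective weight is W + (alpha / r) * (B @ A); only A and B are
    trained, so the number of trainable parameters scales with the rank r
    rather than with the full weight matrix.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the pre-trained weights
        self.scaling = alpha / r
        # A starts small and B starts at zero, so training begins from the
        # unmodified pre-trained model.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Adapt a single 768x768 projection; only the LoRA factors receive gradients.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total} parameters")
```

Because B is initialized to zero, the adapted layer is initially identical to the frozen base layer, and the low-rank update can later be merged into the base weights so inference incurs no extra cost.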

Papers

Showing 776–800 of 935 papers (page 32 of 38)

| Title | Status | Hype |
|---|---|---|
| Assessing Translation capabilities of Large Language Models involving English and Indian Languages | — | 0 |
| HeLM: Highlighted Evidence augmented Language Model for Enhanced Table-to-Text Generation | — | 0 |
| On the Analysis of Cross-Lingual Prompt Tuning for Decoder-based Multilingual Model | — | 0 |
| Low-Rank Adaptation for Multilingual Summarization: An Empirical Study | — | 0 |
| SAMIHS: Adaptation of Segment Anything Model for Intracranial Hemorrhage Segmentation | Code | 1 |
| PEMA: An Offsite-Tunable Plug-in External Memory Adaptation for Language Models | Code | 0 |
| Aggregate, Decompose, and Fine-Tune: A Simple Yet Effective Factor-Tuning Method for Vision Transformer | Code | 1 |
| Unified Low-Resource Sequence Labeling by Sample-Aware Dynamic Sparse Finetuning | Code | 0 |
| S-LoRA: Serving Thousands of Concurrent LoRA Adapters | Code | 3 |
| BioInstruct: Instruction Tuning of Large Language Models for Biomedical Natural Language Processing | — | 0 |
| FedPEAT: Convergence of Federated Learning, Parameter-Efficient Fine Tuning, and Emulator Assisted Tuning for Artificial Intelligence Foundation Models with Mobile Edge Computing | — | 0 |
| Content-based Controls For Music Large Language Modeling | Code | 1 |
| The Expressive Power of Low-Rank Adaptation | Code | 1 |
| Improving Few-shot Generalization of Safety Classifiers via Data Augmented Parameter-Efficient Fine-Tuning | — | 0 |
| Mixture-of-Linguistic-Experts Adapters for Improving and Interpreting Pre-trained Language Models | — | 0 |
| Improving generalization in large language models by learning prefix subspaces | Code | 0 |
| Contextual Refinement of Translations: Large Language Models for Sentence and Document-Level Post-Editing | — | 0 |
| When MOE Meets LLMs: Parameter Efficient Fine-tuning for Multi-task Medical Applications | Code | 1 |
| Towards a General Framework for Continual Learning with Pre-training | Code | 1 |
| Identifying and Adapting Transformer-Components Responsible for Gender Bias in an English Language Model | Code | 0 |
| Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning | Code | 0 |
| Non-Intrusive Adaptation: Input-Centric Parameter-efficient Fine-Tuning for Versatile Multimodal Modeling | — | 0 |
| Domain Generalization Using Large Pretrained Models with Mixture-of-Adapters | Code | 1 |
| FATE-LLM: A Industrial Grade Federated Learning Framework for Large Language Models | Code | 2 |
| AutoVP: An Automated Visual Prompting Framework and Benchmark | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 81.93 | — | Unverified |
| 4 | LLaMA2-7b | Accuracy (%) | 80.28 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 76.68 | — | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 76.67 | — | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 76.27 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 70.8 | — | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 70.09 | — | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 69.85 | — | Unverified |