SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by training only a small fraction of the model's parameters, or a small number of added parameters, while the rest remain frozen. This approach is particularly useful when computational resources are limited, or when it is desirable to preserve the original model's behavior on its initial task.
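As a concrete illustration of the idea, below is a minimal sketch of one widely used PEFT method, low-rank adaptation (LoRA), which appears in several paper titles on this page. It assumes PyTorch; the class name `LoRALinear` and the hyperparameters `r` and `alpha` are illustrative placeholders, not any specific paper's implementation.

```python
# Minimal LoRA sketch (illustrative, not from any paper listed here).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where only A and B are trained."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # A is initialized with small Gaussian noise, B with zeros, so the
        # adapted model starts out identical to the frozen base model.
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank trainable path.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling

# Usage: wrap a layer, then train only the adapter parameters.
layer = LoRALinear(nn.Linear(768, 768))
trainable = [p for p in layer.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable))  # r*(in+out) = 12288, vs. 768*768
```

Only the two low-rank matrices are updated during fine-tuning, so the trainable parameter count scales with `r * (in + out)` rather than `in * out`.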

Papers

Showing 851–900 of 935 papers

| Title | Status | Hype |
|---|---|---|
| On the Analysis of Cross-Lingual Prompt Tuning for Decoder-based Multilingual Model | | 0 |
| Low-Rank Adaptation for Multilingual Summarization: An Empirical Study | | 0 |
| PEMA: An Offsite-Tunable Plug-in External Memory Adaptation for Language Models | Code | 0 |
| Unified Low-Resource Sequence Labeling by Sample-Aware Dynamic Sparse Finetuning | Code | 0 |
| BioInstruct: Instruction Tuning of Large Language Models for Biomedical Natural Language Processing | | 0 |
| FedPEAT: Convergence of Federated Learning, Parameter-Efficient Fine Tuning, and Emulator Assisted Tuning for Artificial Intelligence Foundation Models with Mobile Edge Computing | | 0 |
| Improving Few-shot Generalization of Safety Classifiers via Data Augmented Parameter-Efficient Fine-Tuning | | 0 |
| Mixture-of-Linguistic-Experts Adapters for Improving and Interpreting Pre-trained Language Models | | 0 |
| Improving generalization in large language models by learning prefix subspaces | Code | 0 |
| Contextual Refinement of Translations: Large Language Models for Sentence and Document-Level Post-Editing | | 0 |
| Identifying and Adapting Transformer-Components Responsible for Gender Bias in an English Language Model | Code | 0 |
| Non-Intrusive Adaptation: Input-Centric Parameter-efficient Fine-Tuning for Versatile Multimodal Modeling | | 0 |
| Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning | Code | 0 |
| QFT: Quantized Full-parameter Tuning of LLMs with Affordable Resources | | 0 |
| TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models | | 0 |
| Parameterizing Context: Unleashing the Power of Parameter-Efficient Fine-Tuning and In-Context Tuning for Continual Table Semantic Parsing | | 0 |
| Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning | | 0 |
| Conversational Factor Information Retrieval Model (ConFIRM) | Code | 0 |
| PETA: Parameter-Efficient Trojan Attacks | | 0 |
| LoRA ensembles for large language model fine-tuning | | 0 |
| Pushing Large Language Models to the 6G Edge: Vision, Challenges, and Opportunities | | 0 |
| LORD: Low Rank Decomposition Of Monolingual Code LLMs For One-Shot Compression | | 0 |
| PEFTT: Parameter-Efficient Fine-Tuning for low-resource Tibetan pre-trained language models | | 0 |
| Sparsely Shared LoRA on Whisper for Child Speech Recognition | | 0 |
| Test-Time Training for Speech | | 0 |
| Enhancing Multilingual Speech Recognition through Language Prompt Tuning and Frame-Level Language Adapter | | 0 |
| Scaled Prompt-Tuning for Few-Shot Natural Language Generation | | 0 |
| Exploring the Benefits of Differentially Private Pre-training and Parameter-Efficient Fine-tuning for Table Transformers | Code | 0 |
| FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large Language Models in Federated Learning | | 0 |
| SAM-PARSER: Fine-tuning SAM Efficiently by Parameter Space Reconstruction | | 0 |
| LLaMA-Reviewer: Advancing Code Review Automation with Large Language Models through Parameter-Efficient Fine-Tuning | | 0 |
| Comparison between parameter-efficient techniques and full fine-tuning: A case study on multilingual news article classification | Code | 0 |
| SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models | | 0 |
| WIKITIDE: A Wikipedia-Based Timestamped Definition Pairs Dataset | | 0 |
| MA-FSAR: Multimodal Adaptation of CLIP for Few-Shot Action Recognition | | 0 |
| Towards Trustworthy and Aligned Machine Learning: A Data-centric Survey with Causality Perspectives | | 0 |
| DVPT: Dynamic Visual Prompt Tuning of Large Pre-trained Models for Medical Image Analysis | Code | 0 |
| SuryaKiran at MEDIQA-Sum 2023: Leveraging LoRA for Clinical Dialogue Summarization | | 0 |
| Prompt to be Consistent is Better than Self-Consistent? Few-Shot and Zero-Shot Fact Verification with Pre-trained Language Models | Code | 0 |
| PreQuant: A Task-agnostic Quantization Approach for Pre-trained Language Models | | 0 |
| Parameter-Efficient Fine-Tuning without Introducing New Latency | Code | 0 |
| Neural Architecture Search for Parameter-Efficient Fine-tuning of Large Pre-trained Language Models | | 0 |
| Towards Adaptive Prefix Tuning for Parameter-Efficient Language Model Fine-tuning | | 0 |
| Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization | | 0 |
| Parameter-Efficient Language Model Tuning with Active Learning in Low-Resource Settings | Code | 0 |
| SPARSEFIT: Few-shot Prompting with Sparse Fine-tuning for Jointly Generating Predictions and Natural Language Explanations | Code | 0 |
| Ahead-of-Time P-Tuning | | 0 |
| G-Adapter: Towards Structure-Aware Parameter-Efficient Transfer Learning for Graph Transformer Networks | | 0 |
| Parameter-Efficient Fine-Tuning for Medical Image Analysis: The Missed Opportunity | | 0 |
| Exploring Zero and Few-shot Techniques for Intent Classification | | 0 |
Page 18 of 19

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified |
| 4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified |