SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by training only a small fraction of the model's parameters, or a small set of added modules, while keeping the rest frozen. This approach is particularly useful when computational resources are limited, and it helps preserve the original model's capabilities on its initial task, reducing the risk of catastrophic forgetting.
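As a concrete illustration, below is a minimal sketch of one widely used PEFT method, LoRA (low-rank adaptation), assuming PyTorch. The class name `LoRALinear` and all hyperparameter values are illustrative choices for this sketch, not taken from any specific library or paper listed here.

```python
# Minimal LoRA-style PEFT sketch (assumes PyTorch; names are illustrative).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a low-rank trainable update:
    y = W x + (alpha / r) * B A x, where only A and B are trained."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # A is small-random, B is zero, so the wrapped layer starts out
        # exactly equal to the original pre-trained layer.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: swap a pre-trained linear layer for the wrapped version and
# train only the adapter parameters A and B.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")  # a small fraction
```

Because only the low-rank factors are updated, the trainable parameter count here is roughly 2 x r x d instead of d x d, which is what makes the approach practical under tight compute and memory budgets.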

Papers

Showing 701–725 of 935 papers

Title | Status | Hype
HSACNet: Hierarchical Scale-Aware Consistency Regularized Semi-Supervised Change Detection | | 0
HSplitLoRA: A Heterogeneous Split Parameter-Efficient Fine-Tuning Framework for Large Language Models | | 0
HUT: A More Computation Efficient Fine-Tuning Method With Hadamard Updated Transformation | | 0
Hyper Compressed Fine-Tuning of Large Foundation Models with Quantum Inspired Adapters | | 0
HyperLoader: Integrating Hypernetwork-Based LoRA and Adapter Layers into Multi-Task Transformers for Sequence Labelling | | 0
Hypernetworks for Personalizing ASR to Atypical Speech | | 0
HyperPELT: Unified Parameter-Efficient Language Model Tuning for Both Language and Vision-and-Language Tasks | | 0
HyperTuning: Toward Adapting Large Language Models without Back-propagation | | 0
IAPT: Instruction-Aware Prompt Tuning for Large Language Models | | 0
ICL Markup: Structuring In-Context Learning using Soft-Token Tags | | 0
iConFormer: Dynamic Parameter-Efficient Tuning with Input-Conditioned Adaptation | | 0
Improving Domain Adaptation through Extended-Text Reading Comprehension | | 0
Improving Few-shot Generalization of Safety Classifiers via Data Augmented Parameter-Efficient Fine-Tuning | | 0
Improving LoRA in Privacy-preserving Federated Learning | | 0
InfoSAM: Fine-Tuning the Segment Anything Model from An Information-Theoretic Perspective | | 0
InstantFT: An FPGA-Based Runtime Subsecond Fine-tuning of CNN Models | | 0
Investigating Automatic Scoring and Feedback using Large Language Models | | 0
Investigating Decoder-only Large Language Models for Speech-to-text Translation | | 0
Is Multiple Object Tracking a Matter of Specialization? | | 0
Is your LLM trapped in a Mental Set? Investigative study on how mental sets affect the reasoning capabilities of LLMs | | 0
iTBLS: A Dataset of Interactive Conversations Over Tabular Information | | 0
Keep the Balance: A Parameter-Efficient Symmetrical Framework for RGB+X Semantic Segmentation | | 0
KerZOO: Kernel Function Informed Zeroth-Order Optimization for Accurate and Accelerated LLM Fine-Tuning | | 0
Know Where You're Going: Meta-Learning for Parameter-Efficient Fine-Tuning | | 0
L4Q: Parameter Efficient Quantization-Aware Fine-Tuning on Large Language Models | | 0
Page 29 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified