SOTAVerified

parameter-efficient fine-tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks while updating only a small fraction of the model's parameters: the bulk of the pre-trained weights are frozen, and a small set of added or selected parameters (for example adapters, low-rank (LoRA) matrices, or prompt vectors) is trained instead. This approach is particularly useful when compute or memory is limited, and it helps preserve the original model's behaviour on the task it was pre-trained for.
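
As a concrete illustration, below is a minimal sketch of one widely used PEFT technique: a LoRA-style low-rank adapter wrapped around a frozen linear layer. It assumes PyTorch; the class name `LoRALinear` and the hyperparameters (`rank`, `alpha`) are illustrative choices, not taken from any specific paper listed on this page.

```python
# Minimal sketch of a LoRA-style adapter (assumes PyTorch is installed).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pre-trained linear layer with a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)      # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors A (rank x in) and B (out x rank); B starts at zero so
        # the adapted layer initially behaves exactly like the frozen base layer.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction: W x + scale * B A x
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

# Usage: wrap a layer of a pre-trained model and train only the adapter parameters.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"{trainable} trainable parameters out of {total}")
```

Only the two low-rank matrices are updated during fine-tuning, which is why the trainable parameter count printed above is roughly 2% of the layer's total in this configuration.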

Papers

Showing 501–525 of 935 papers

Title | Status | Hype
Deep Content Understanding Toward Entity and Aspect Target Sentiment Analysis on Foundation Models | Code | 0
Investigating Decoder-only Large Language Models for Speech-to-text Translation | – | 0
Soft Language Prompts for Language Transfer | Code | 0
Let the Expert Stick to His Last: Expert-Specialized Fine-Tuning for Sparse Architectural Large Language Models | Code | 4
CatMemo at the FinLLM Challenge Task: Fine-Tuning Large Language Models using Data Fusion in Financial Applications | – | 0
FineCLIPER: Multi-modal Fine-grained CLIP for Dynamic Facial Expression Recognition with AdaptERs | – | 0
Adapting Multilingual LLMs to Low-Resource Languages with Knowledge Graphs via Adapters | Code | 0
HyperLoader: Integrating Hypernetwork-Based LoRA and Adapter Layers into Multi-Task Transformers for Sequence Labelling | – | 0
SplitLoRA: A Split Parameter-Efficient Fine-Tuning Framework for Large Language Models | – | 0
FairMedFM: Fairness Benchmarking for Medical Imaging Foundation Models | Code | 2
Increasing Model Capacity for Free: A Simple Strategy for Parameter Efficient Fine-tuning | Code | 1
Embedded Prompt Tuning: Towards Enhanced Calibration of Pretrained Models for Medical Images | Code | 1
Structured Unrestricted-Rank Matrices for Parameter Efficient Fine-tuning | Code | 0
Segment Any Text: A Universal Approach for Robust, Efficient and Adaptable Sentence Segmentation | Code | 7
Towards Scalable Exact Machine Unlearning Using Parameter-Efficient Fine-Tuning | – | 0
Federated Adversarial Learning for Robust Autonomous Landing Runway Detection | – | 0
MU-Bench: A Multitask Multimodal Benchmark for Machine Unlearning | Code | 0
Unlocking the Global Synergies in Low-Rank Adapters | – | 0
Towards Infinite-Long Prefix in Transformer | Code | 0
Bayesian-LoRA: LoRA based Parameter Efficient Fine-Tuning using Optimal Quantization levels and Rank Values trough Differentiable Bayesian Gates | – | 0
Fighting Randomness with Randomness: Mitigating Optimisation Instability of Fine-Tuning using Delayed Ensemble and Noisy Interpolation | Code | 0
RoseLoRA: Row and Column-wise Sparse Low-rank Adaptation of Pre-trained Language Model for Knowledge Editing and Fine-tuning | Code | 0
ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers under Domain Shifts | – | 0
ShareLoRA: Parameter Efficient and Robust Large Language Model Fine-tuning via Shared Low-Rank Adaptation | Code | 0
Promoting Data and Model Privacy in Federated Learning through Quantized LoRA | – | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | – | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | – | Unverified