SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of the model's parameters while keeping the rest frozen. This approach is particularly useful when computational resources are limited or when the original model's performance on its initial task must be preserved.
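Many of the papers listed below build on low-rank adaptation (LoRA), one of the most widely used PEFT methods. As a rough illustration of the idea, here is a minimal PyTorch sketch that freezes a pre-trained linear layer and trains only a low-rank update; the class name LoRALinear and the hyperparameters r and alpha are illustrative, not taken from any specific paper on this page.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where only A and B receive gradients."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # A is small-random, B is zero, so training starts from the base model.
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

# For a 4096x4096 layer, only the two rank-8 factors are trainable:
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"{trainable} trainable of {total} total")  # 65536 of 16846848 (~0.4%)
```

Because the base weights stay frozen, dropping the adapter recovers the original model exactly, which is how PEFT preserves performance on the initial task.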

Papers

Showing 276–300 of 935 papers

Title | Status | Hype
Capacity Control is an Effective Memorization Mitigation Mechanism in Text-Conditional Diffusion Models | Code | 0
An efficient framework based on large foundation model for cervical cytopathology whole slide image screening | Code | 0
Parameter-Efficient Cross-lingual Transfer of Vision and Language Models via Translation-based Alignment | Code | 0
4,500 Seconds: Small Data Training Approaches for Deep UAV Audio Classification | Code | 0
PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models | Code | 0
Orchid2024: A cultivar-level dataset and methodology for fine-grained classification of Chinese Cymbidium Orchids | Code | 0
Efficient Stitchable Task Adaptation | Code | 0
Boosting Domain Incremental Learning: Selecting the Optimal Parameters is All You Need | Code | 0
NLoRA: Nyström-Initiated Low-Rank Adaptation for Large Language Models | Code | 0
Adapting Multilingual LLMs to Low-Resource Languages with Knowledge Graphs via Adapters | Code | 0
Navigating the Landscape of Large Language Models: A Comprehensive Review and Analysis of Paradigms and Fine-Tuning Strategies | Code | 0
On-Device LLM for Context-Aware Wi-Fi Roaming | Code | 0
Parameter-efficient Fine-tuning for improved Convolutional Baseline for Brain Tumor Segmentation in Sub-Saharan Africa Adult Glioma Dataset | Code | 0
PEFT-U: Parameter-Efficient Fine-Tuning for User Personalization | Code | 0
Leveraging Coordinate Momentum in SignSGD and Muon: Memory-Optimized Zero-Order | Code | 0
Black-Box Tuning of Vision-Language Models with Effective Gradient Approximation | Code | 0
MU-Bench: A Multitask Multimodal Benchmark for Machine Unlearning | Code | 0
MotionAura: Generating High-Quality and Motion Consistent Videos using Discrete Diffusion | Code | 0
MSPLoRA: A Multi-Scale Pyramid Low-Rank Adaptation for Efficient Model Fine-Tuning | Code | 0
Multi-View and Multi-Scale Alignment for Contrastive Language-Image Pre-training in Mammography | Code | 0
MoLEx: Mixture of Layer Experts for Finetuning with Sparse Upcycling | Code | 0
MoRE: A Mixture of Low-Rank Experts for Adaptive Multi-Task Learning | Code | 0
EDoRA: Efficient Weight-Decomposed Low-Rank Adaptation via Singular Value Decomposition | Code | 0
Edinburgh Clinical NLP at SemEval-2024 Task 2: Fine-tune your model unless you have access to GPT-4 | Code | 0
Minimal Ranks, Maximum Confidence: Parameter-efficient Uncertainty Quantification for LoRA | Code | 0
Page 12 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified