SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts a pre-trained model to a new task by training only a small fraction of its parameters (or a small set of added ones) while keeping the rest frozen. This approach is particularly useful when computational resources are limited or when the model's performance on its original task must be preserved.
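
To make the idea concrete, below is a minimal sketch of one widely used PEFT method, low-rank adaptation (LoRA), which several of the papers listed here build on (e.g. LoRTA, DLP-LoRA, Pear). The LoRALinear class name, layer sizes, and rank/alpha values are illustrative assumptions, not code from any listed paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative sketch: a frozen pre-trained linear layer plus a
    trainable low-rank update. The effective weight is
    W + (alpha / r) * B @ A, where only A and B are trained."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze all pre-trained weights
        # A starts small and B starts at zero, so the wrapped layer
        # initially reproduces the pre-trained behaviour exactly.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction; only
        # lora_A and lora_B receive gradients during fine-tuning.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


# Wrap a toy "pre-trained" layer; only the LoRA factors are trainable.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable parameters: {trainable:,} of {total:,}")  # 12,288 of 602,880
```

With rank 8 on a 768×768 layer, roughly 12k of about 600k parameters receive gradients, which is where the memory and compute savings of PEFT come from.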

Papers

Showing 576–600 of 935 papers

Title | Status | Hype
Are Large Language Models State-of-the-art Quality Estimators for Machine Translation of User-generated Content? | Code | 0
QERA: an Analytical Framework for Quantization Error Reconstruction | – | 0
DiDOTS: Knowledge Distillation from Large-Language-Models for Dementia Obfuscation in Transcribed Speech | – | 0
LoRTA: Low Rank Tensor Adaptation of Large Language Models | – | 0
BIPEFT: Budget-Guided Iterative Search for Parameter Efficient Fine-Tuning of Large Pretrained Language Models | – | 0
Adapting Segment Anything Model to Melanoma Segmentation in Microscopy Slide Images | – | 0
Llama SLayer 8B: Shallow Layers Hold the Key to Knowledge Injection | Code | 0
NEAT: Nonlinear Parameter-efficient Adaptation of Pre-trained Models | – | 0
House of Cards: Massive Weights in LLMs | – | 0
DLP-LoRA: Efficient Task-Specific LoRA Fusion with a Dynamic, Lightweight Plugin for Large Language Models | Code | 0
Embedding-based statistical inference on generative models | – | 0
SQFT: Low-cost Model Adaptation in Low-precision Sparse Foundation Models | – | 0
Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models | Code | 0
Unsupervised Human Preference Learning | – | 0
Resource Allocation for Stable LLM Training in Mobile Edge Computing | – | 0
Pear: Pruning and Sharing Adapters in Visual Parameter-Efficient Fine-Tuning | Code | 0
FINE: Factorizing Knowledge for Initialization of Variable-sized Diffusion Models | – | 0
A GEN AI Framework for Medical Note Generation | – | 0
HM3: Heterogeneous Multi-Class Model Merging | – | 0
PEDRO: Parameter-Efficient Fine-tuning with Prompt DEpenDent Representation MOdification | – | 0
Multi-View and Multi-Scale Alignment for Contrastive Language-Image Pre-training in Mammography | Code | 0
Efficient In-Domain Question Answering for Resource-Constrained Environments | – | 0
Exploring Acoustic Similarity in Emotional Speech and Music via Self-Supervised Representations | – | 0
Weak-to-Strong Backdoor Attack for Large Language Models | – | 0
Parameter-efficient Bayesian Neural Networks for Uncertainty-aware Depth Estimation | – | 0
Page 24 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | – | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | – | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | – | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | – | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | – | Unverified