SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by training only a small fraction of the model's parameters, or small added modules, while the bulk of the pre-trained weights stay frozen. This approach is particularly useful when computational resources are limited, or when it is desirable to preserve the original model's behavior on its initial task.
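
Many of the papers indexed below build on low-rank adaptation (LoRA), currently the most common PEFT method: the pre-trained weights are frozen and only a small low-rank update is trained. The sketch below is a minimal illustration of that idea in PyTorch, not the implementation behind any paper or benchmark on this page; the class name LoRALinear and the defaults r=8 and alpha=16 are illustrative assumptions.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Illustrative sketch: wraps a frozen nn.Linear with a trainable
    # low-rank update, y = W x + (alpha / r) * B A x.
    # Only A and B receive gradients; W stays frozen.
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # A starts small and random, B starts at zero, so the adapter
        # is a no-op at initialization
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # frozen path plus scaled low-rank adapter path
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

For a 4096x4096 layer with r=8, the adapter trains r * (in_features + out_features) = 65,536 parameters instead of roughly 16.8 million, which is why methods like these can fine-tune large models on limited hardware.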

Papers

Showing 726–750 of 935 papers

Title | Status | Hype
LACoS-BLOOM: Low-rank Adaptation with Contrastive objective on 8 bits Siamese-BLOOM | | 0
Language and Task Arithmetic with Parameter-Efficient Layers for Zero-Shot Summarization | | 0
LayerNorm: A key component in parameter-efficient fine-tuning | | 0
Learning to Route for Dynamic Adapter Composition in Continual Learning with Language Models | | 0
LeMo: Enabling LEss Token Involvement for MOre Context Fine-tuning | | 0
Less is More: Extreme Gradient Boost Rank-1 Adaption for Efficient Finetuning of LLMs | | 0
Let's Focus on Neuron: Neuron-Level Supervised Fine-tuning for Large Language Model | | 0
Lifelong Learning with Task-Specific Adaptation: Addressing the Stability-Plasticity Dilemma | | 0
LLaMA-Reviewer: Advancing Code Review Automation with Large Language Models through Parameter-Efficient Fine-Tuning | | 0
LLMI3D: Empowering LLM with 3D Perception from a Single 2D Image | | 0
LoCA: Location-Aware Cosine Adaptation for Parameter-Efficient Fine-Tuning | | 0
LoFT: Low-Rank Adaptation That Behaves Like Full Fine-Tuning | | 0
LoKi: Low-dimensional KAN for Efficient Fine-tuning Image Models | | 0
LoKO: Low-Rank Kalman Optimizer for Online Fine-Tuning of Large Models | | 0
LoRA as a Flexible Framework for Securing Large Vision Systems | | 0
LoRACode: LoRA Adapters for Code Embeddings | | 0
LoRA Diffusion: Zero-Shot LoRA Synthesis for Diffusion Model Personalization | | 0
LoRA-drop: Efficient LoRA Parameter Pruning based on Output Evaluation | | 0
LoRA Dropout as a Sparsity Regularizer for Overfitting Control | | 0
LoRA ensembles for large language model fine-tuning | | 0
LoRA-FAIR: Federated LoRA Fine-Tuning with Aggregation and Initialization Refinement | | 0
LoRAGuard: An Effective Black-box Watermarking Approach for LoRAs | | 0
LoRA-Mini : Adaptation Matrices Decomposition and Selective Training | | 0
LoRA-X: Bridging Foundation Models with Training-Free Cross-Model Adaptation | | 0
LORD: Low Rank Decomposition Of Monolingual Code LLMs For One-Shot Compression | | 0
Page 30 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified