
Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) adapts pre-trained models to new tasks by updating only a small fraction of the model's parameters (or a small set of newly added parameters) while keeping the rest frozen. This approach is particularly useful when computational resources are limited or when the original model's behavior on its initial task should be preserved.
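As a rough illustration of the idea, the sketch below applies one widely used PEFT method, LoRA (low-rank adaptation), with Hugging Face's peft library; the base model checkpoint, rank, and target modules are assumptions chosen for the example rather than values drawn from any paper listed below.

```python
# Minimal LoRA fine-tuning sketch (illustrative assumptions: checkpoint name,
# rank, and target modules; adjust for your own model and task).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# LoRA injects small trainable low-rank matrices into selected weight
# matrices; the original pre-trained weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,                         # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

Only the injected low-rank matrices receive gradients, so the trainable footprint is a small fraction of the full model, which is what makes this kind of adaptation feasible on limited hardware.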

Papers

Showing 426–450 of 935 papers

Title | Status | Hype
MAST-Pro: Dynamic Mixture-of-Experts for Adaptive Segmentation of Pan-Tumors with Knowledge-Driven Prompts | — | 0
Let's Focus on Neuron: Neuron-Level Supervised Fine-tuning for Large Language Model | — | 0
Activation Control for Efficiently Eliciting Long Chain-of-thought Ability of Language Models | — | 0
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy | — | 0
Lifelong Learning with Task-Specific Adaptation: Addressing the Stability-Plasticity Dilemma | — | 0
Advancing Enterprise Spatio-Temporal Forecasting Applications: Data Mining Meets Instruction Tuning of Language Models For Multi-modal Time Series Analysis in Low-Resource Settings | — | 0
MAP: Revisiting Weight Decomposition for Low-Rank Adaptation | — | 0
Harnessing Generative LLMs for Enhanced Financial Event Entity Extraction Performance | — | 0
LLaMA-Reviewer: Advancing Code Review Automation with Large Language Models through Parameter-Efficient Fine-Tuning | — | 0
HARIS: Human-Like Attention for Reference Image Segmentation | — | 0
LLMI3D: Empowering LLM with 3D Perception from a Single 2D Image | — | 0
LoCA: Location-Aware Cosine Adaptation for Parameter-Efficient Fine-Tuning | — | 0
Hallucinations and Truth: A Comprehensive Accuracy Evaluation of RAG, LoRA and DoRA | — | 0
LoFT: Low-Rank Adaptation That Behaves Like Full Fine-Tuning | — | 0
Deconfounded Causality-aware Parameter-Efficient Fine-Tuning for Problem-Solving Improvement of LLMs | — | 0
Mamba State-Space Models Are Lyapunov-Stable Learners | — | 0
LoKO: Low-Rank Kalman Optimizer for Online Fine-Tuning of Large Models | — | 0
Efficient Federated Class-Incremental Learning of Pre-Trained Models via Task-agnostic Low-rank Residual Adaptation | — | 0
Adapters Mixup: Mixing Parameter-Efficient Adapters to Enhance the Adversarial Robustness of Fine-tuned Pre-trained Text Classifiers | — | 0
LoRA as a Flexible Framework for Securing Large Vision Systems | — | 0
LoRACode: LoRA Adapters for Code Embeddings | — | 0
GSQ-Tuning: Group-Shared Exponents Integer in Fully Quantized Training for LLMs On-Device Fine-tuning | — | 0
GraphLoRA: Empowering LLMs Fine-Tuning via Graph Collaboration of MoE | — | 0
Decentralized Low-Rank Fine-Tuning of Large Language Models | — | 0
Graph Adapter of EEG Foundation Models for Parameter Efficient Fine Tuning | — | 0
Page 18 of 38

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 81.93 | — | Unverified
4 | LLaMA2-7b | Accuracy (%) | 80.28 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 76.68 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 76.67 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 76.27 | — | Unverified

# | Model | Metric | Claimed | Verified | Status
1 | LLaMA2-7b | Accuracy (%) | 70.8 | — | Unverified
2 | LLaMA2-7b | Accuracy (%) | 70.09 | — | Unverified
3 | LLaMA2-7b | Accuracy (%) | 69.85 | — | Unverified