SOTAVerified

Parameter-Efficient Fine-Tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by training only a small fraction of the model's parameters (or a small set of added parameters) while keeping the rest frozen. This is particularly useful when computational resources are limited, or when the frozen base model must retain its original behavior on the tasks it was trained for.
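
The most widely used PEFT method in the list below is LoRA (Hu et al., 2021), which freezes a pre-trained weight matrix W and learns a low-rank update scaled as (alpha / r) * B A, with B zero-initialized so training starts exactly from the pre-trained model. Below is a minimal sketch assuming PyTorch; the class name LoRALinear and the defaults rank=8, alpha=16 are illustrative choices, not taken from any particular library.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a trainable low-rank update:
    h = W x + (alpha / r) * B A x, where only A and B are trained."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scale = alpha / rank
        # Low-rank factors: A projects down to `rank`, B projects back up.
        # B starts at zero, so the wrapped layer initially matches the base.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap a layer, then pass only the trainable parameters to the optimizer.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable}/{total}")  # ~2% of the layer's parameters
```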

Papers

Showing 51–100 of 935 papers

| Title | Status | Hype |
|---|---|---|
| Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation | Code | 2 |
| Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance | Code | 2 |
| An Embarrassingly Simple Approach for LLM with Strong ASR Capacity | Code | 2 |
| Learning to Route Among Specialized Experts for Zero-Shot Generalization | Code | 2 |
| PeFoMed: Parameter Efficient Fine-tuning of Multimodal Large Language Models for Medical Imaging | Code | 2 |
| ClassWise-SAM-Adapter: Parameter Efficient Fine-tuning Adapts Segment Anything to SAR Domain for Semantic Segmentation | Code | 2 |
| MTLoRA: Low-Rank Adaptation Approach for Efficient Multi-Task Learning | Code | 2 |
| mLoRA: Fine-Tuning LoRA Adapters via Highly-Efficient Pipeline Parallelism in Multiple GPUs | Code | 2 |
| CoLLiE: Collaborative Training of Large Language Models in an Efficient Way | Code | 2 |
| FATE-LLM: A Industrial Grade Federated Learning Framework for Large Language Models | Code | 2 |
| Pushing Mixture of Experts to the Limit: Extremely Parameter Efficient MoE for Instruction Tuning | Code | 2 |
| Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following | Code | 2 |
| RS5M and GeoRSCLIP: A Large Scale Vision-Language Dataset and A Large Vision-Language Model for Remote Sensing | Code | 2 |
| Full Parameter Fine-tuning for Large Language Models with Limited Resources | Code | 2 |
| One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning | Code | 2 |
| Explicit Visual Prompting for Universal Foreground Segmentations | Code | 2 |
| GlyphDraw: Seamlessly Rendering Text with Intricate Spatial Structures in Text-to-Image Generation | Code | 2 |
| LoRA: Low-Rank Adaptation of Large Language Models | Code | 2 |
| Dynamic Mixture of Progressive Parameter-Efficient Expert Library for Lifelong Robot Learning | Code | 1 |
| CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning | Code | 1 |
| DA-VPT: Semantic-Guided Visual Prompt Tuning for Vision Transformers | Code | 1 |
| LoKI: Low-damage Knowledge Implanting of Large Language Models | Code | 1 |
| Universal Reasoner: A Single, Composable Plug-and-Play Reasoner for Frozen LLMs | Code | 1 |
| Gated Integration of Low-Rank Adaptation for Continual Learning of Language Models | Code | 1 |
| Quaff: Quantized Parameter-Efficient Fine-Tuning under Outlier Spatial Stability Hypothesis | Code | 1 |
| ABBA: Highly Expressive Hadamard Product Adaptation for Large Language Models | Code | 1 |
| Reasoning on a Budget: Miniaturizing DeepSeek R1 with SFT-GRPO Alignment for Instruction-Tuned LLMs | Code | 1 |
| Multi-Token Prediction Needs Registers | Code | 1 |
| Vision Graph Prompting via Semantic Low-Rank Decomposition | Code | 1 |
| GAPrompt: Geometry-Aware Point Cloud Prompt for 3D Vision Model | Code | 1 |
| SpectrumFM: A Foundation Model for Intelligent Spectrum Management | Code | 1 |
| DeeCLIP: A Robust and Generalizable Transformer-Based Framework for Detecting AI-Generated Images | Code | 1 |
| PointLoRA: Low-Rank Adaptation with Token Selection for Point Cloud Learning | Code | 1 |
| Efficient Self-Supervised Adaptation for Medical Image Analysis | Code | 1 |
| MoST: Efficient Monarch Sparse Tuning for 3D Representation Learning | Code | 1 |
| SPMTrack: Spatio-Temporal Parameter-Efficient Fine-Tuning with Mixture of Experts for Scalable Visual Tracking | Code | 1 |
| LoRA Subtraction for Drift-Resistant Space in Exemplar-Free Continual Learning | Code | 1 |
| SALT: Singular Value Adaptation with Low-Rank Transformation | Code | 1 |
| Empowering Smaller Models: Tuning LLaMA and Gemma with Chain-of-Thought for Ukrainian Exam Tasks | Code | 1 |
| Rethinking Few-Shot Adaptation of Vision-Language Models in Two Stages | Code | 1 |
| Revisiting semi-supervised learning in the era of foundation models | Code | 1 |
| State-offset Tuning: State-based Parameter-Efficient Fine-Tuning for State Space Models | Code | 1 |
| R-LoRA: Random Initialization of Multi-Head LoRA for Multi-Task Learning | Code | 1 |
| CoPEFT: Fast Adaptation Framework for Multi-Agent Collaborative Perception with Parameter-Efficient Fine-Tuning | Code | 1 |
| SSMLoRA: Enhancing Low-Rank Adaptation with State Space Model | Code | 1 |
| Joint Localization and Activation Editing for Low-Resource Fine-Tuning | Code | 1 |
| Parameter Efficient Fine-Tuning of Segment Anything Model | Code | 1 |
| HALO: Hadamard-Assisted Lower-Precision Optimization for LLMs | Code | 1 |
| Rethinking Addressing in Language Models via Contexualized Equivariant Positional Encoding | Code | 1 |
| KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models | Code | 1 |

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 82.63 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 81.93 | | Unverified |
| 4 | LLaMA2-7b | Accuracy (%) | 80.28 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 76.68 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 76.67 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 76.27 | | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | LLaMA2-7b | Accuracy (%) | 70.8 | | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 70.09 | | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 69.85 | | Unverified |