SOTAVerified

parameter-efficient fine-tuning

Parameter-Efficient Fine-Tuning (PEFT) is a family of techniques for adapting pre-trained models to new tasks by training only a small subset of the model's parameters (or a small number of newly added ones) while keeping the rest frozen. This approach is particularly useful when computational resources are limited, or when it is desirable to preserve the original model's performance on its initial task.
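
For illustration, here is a minimal sketch of one widely used PEFT method, low-rank adaptation (LoRA), written in plain PyTorch. The class and hyperparameter names (`LoRALinear`, `r`, `alpha`) are illustrative, not taken from any particular library, and this is a sketch of the idea rather than a production implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Freeze a pre-trained linear layer and learn a low-rank update:
    h = W x + (alpha / r) * B(A(x)); only A and B receive gradients."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # the pre-trained weights stay fixed
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.normal_(self.lora_a.weight, std=0.01)
        nn.init.zeros_(self.lora_b.weight)   # update is zero at initialization
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Wrap a toy 768x768 projection and count what actually trains.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} / {total:,}")  # ~12k of ~600k parameters
```

With rank 8 on a 768x768 projection, only about 12k of roughly 600k parameters are trainable (around 2%), which is the core trade-off PEFT methods exploit.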

Papers

Showing 251–300 of 935 papers

| Title | Status | Hype |
| --- | --- | --- |
| ABBA: Highly Expressive Hadamard Product Adaptation for Large Language Models | Code | 1 |
| Expanding Sparse Tuning for Low Memory Usage | Code | 1 |
| HALO: Hadamard-Assisted Lower-Precision Optimization for LLMs | Code | 1 |
| CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning | Code | 1 |
| ILLUMINER: Instruction-tuned Large Language Models as Few-shot Intent Classifier and Slot Filler | Code | 1 |
| LoFiT: Localized Fine-tuning on LLM Representations | Code | 1 |
| MapSAM: Adapting Segment Anything Model for Automated Feature Detection in Historical Maps | Code | 1 |
| PACE: Marrying generalization in PArameter-efficient fine-tuning with Consistency rEgularization | Code | 1 |
| FineDiffusion: Scaling up Diffusion Models for Fine-grained Image Generation with 10,000 Classes | Code | 1 |
| Imaging foundation model for universal enhancement of non-ideal measurement CT | Code | 1 |
| Exploring Foundation Models Fine-Tuning for Cytology Classification | Code | 1 |
| Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models | Code | 1 |
| Exact and Efficient Unlearning for Large Language Model-based Recommendation | — | 0 |
| Choice of PEFT Technique in Continual Learning: Prompt Tuning is Not All You Need | — | 0 |
| Enhancing the efficiency of protein language models with minimal wet-lab data through few-shot learning | — | 0 |
| An Empirical Study on the Transferability of Transformer Modules in Parameter-Efficient Fine-Tuning | — | 0 |
| HyperLoader: Integrating Hypernetwork-Based LoRA and Adapter Layers into Multi-Task Transformers for Sequence Labelling | — | 0 |
| Enhancing News Summarization with ELearnFit through Efficient In-Context Learning and Efficient Fine-Tuning | — | 0 |
| Enhancing Multi-modal Models with Heterogeneous MoE Adapters for Fine-tuning | — | 0 |
| Challenges in Adapting Multilingual LLMs to Low-Resource Languages using LoRA PEFT Tuning | — | 0 |
| Enhancing Multilingual Speech Recognition through Language Prompt Tuning and Frame-Level Language Adapter | — | 0 |
| Enhancing Low-Resource LLMs Classification with PEFT and Synthetic Data | — | 0 |
| Chain of LoRA: Efficient Fine-tuning of Language Models via Residual Learning | — | 0 |
| Hypernetworks for Personalizing ASR to Atypical Speech | — | 0 |
| Enhancing Large Language Model Efficiency via Symbolic Compression: A Formal Approach Towards Interpretability | — | 0 |
| Enhancing knowledge retention for continual learning with domain-specific adapters and features gating | — | 0 |
| Chain of History: Learning and Forecasting with LLMs for Temporal Knowledge Graph Completion | — | 0 |
| Enhancing Aviation Communication Transcription: Fine-Tuning Distil-Whisper with LoRA | — | 0 |
| Enhanced Continual Learning of Vision-Language Models with Model Fusion | — | 0 |
| Certified PEFTSmoothing: Parameter-Efficient Fine-Tuning with Randomized Smoothing | — | 0 |
| An Empirical Study of Parameter Efficient Fine-tuning on Vision-Language Pre-train Model | — | 0 |
| Enfoque Odychess: A Dialectical, Constructivist, and Adaptive Method for Teaching Chess with Generative Artificial Intelligences (translated from Spanish) | — | 0 |
| Enabling Efficient On-Device Fine-Tuning of LLMs Using Only Inference Engines | — | 0 |
| CatMemo at the FinLLM Challenge Task: Fine-Tuning Large Language Models using Data Fusion in Financial Applications | — | 0 |
| CASA: Class-Agnostic Shared Attributes in Vision-Language Models for Efficient Incremental Object Detection | — | 0 |
| 6G WavesFM: A Foundation Model for Sensing, Communication, and Localization | — | 0 |
| HyperPELT: Unified Parameter-Efficient Language Model Tuning for Both Language and Vision-and-Language Tasks | — | 0 |
| HUT: A More Computation Efficient Fine-Tuning Method With Hadamard Updated Transformation | — | 0 |
| Empirical Analysis of the Strengths and Weaknesses of PEFT Techniques for LLMs | — | 0 |
| Embedding-based statistical inference on generative models | — | 0 |
| ELiTe: Efficient Image-to-LiDAR Knowledge Transfer for Semantic Segmentation | — | 0 |
| ELBA-Bench: An Efficient Learning Backdoor Attacks Benchmark for Large Language Models | — | 0 |
| EF-LLM: Energy Forecasting LLM with AI-assisted Automation, Enhanced Sparse Prediction, Hallucination Detection | — | 0 |
| Bridging the Linguistic Divide: A Survey on Leveraging Large Language Models for Machine Translation | — | 0 |
| Efficient Telecom Specific LLM: TSLAM-Mini with QLoRA and Digital Twin Data | — | 0 |
| BoRA: Bi-dimensional Weight-Decomposed Low-Rank Adaptation | — | 0 |
| Adapting Segment Anything Model to Melanoma Segmentation in Microscopy Slide Images | — | 0 |
| Hyper Compressed Fine-Tuning of Large Foundation Models with Quantum Inspired Adapters | — | 0 |
| A Multi-Encoder Frozen-Decoder Approach for Fine-Tuning Large Language Models | — | 0 |
| MoDE: Effective Multi-task Parameter Efficient Fine-Tuning with a Mixture of Dyadic Experts | — | 0 |
Page 6 of 19

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 82.63 | — | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 81.93 | — | Unverified |
| 4 | LLaMA2-7b | Accuracy (%) | 80.28 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 76.68 | — | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 76.67 | — | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 76.27 | — | Unverified |

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | LLaMA2-7b | Accuracy (%) | 70.8 | — | Unverified |
| 2 | LLaMA2-7b | Accuracy (%) | 70.09 | — | Unverified |
| 3 | LLaMA2-7b | Accuracy (%) | 69.85 | — | Unverified |