Personalized Collaborative Fine-Tuning for On-Device Large Language Models

2024-04-15 · Code Available

Nicolas Wagner, Dongyang Fan, Martin Jaggi

Abstract

We explore on-device self-supervised collaborative fine-tuning of large language models under limited local data availability. Taking inspiration from the collaborative learning community, we introduce three distinct trust-weighted gradient aggregation schemes: weight similarity-based, prediction similarity-based, and validation performance-based. To minimize communication overhead, we integrate Low-Rank Adaptation (LoRA) and exchange only LoRA weight updates. Our protocols, driven by prediction and performance metrics, surpass both FedAvg and local fine-tuning, which is particularly evident in realistic scenarios with more diverse local data distributions. The results underscore the effectiveness of our approach in addressing heterogeneity and scarcity within local datasets.
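The core idea in the abstract can be sketched as follows: each client scores its peers with a trust metric (here, prediction similarity on a shared reference batch), normalizes those scores into aggregation weights, and forms a trust-weighted average of the peers' LoRA weight updates. This is an illustrative sketch, not the paper's exact protocol; the function names, the negative-MSE similarity measure, and the softmax normalization are assumptions.

```python
import numpy as np

def prediction_similarity_trust(own_preds, peer_preds_list):
    # Trust score per peer: negative mean squared distance between the
    # peer's predictions and our own on a shared reference batch
    # (illustrative choice of similarity measure).
    return [-np.mean((own_preds - p) ** 2) for p in peer_preds_list]

def aggregate_lora_updates(updates, trust_scores):
    # Softmax-normalize trust scores into aggregation weights (assumed
    # normalization), then take a trust-weighted average of the peers'
    # LoRA weight updates.
    s = np.array(trust_scores, dtype=float)
    w = np.exp(s - s.max())
    w /= w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))
```

Because only the low-rank LoRA matrices are exchanged, each round communicates a small fraction of the full model's parameters, which is what makes the scheme viable on-device.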
