
Enhancing Multi-modal Models with Heterogeneous MoE Adapters for Fine-tuning

2025-03-26

Sashuai Zhou, Hai Huang, Yan Xia


Abstract

Multi-modal models excel in cross-modal tasks but are computationally expensive due to their billions of parameters. Parameter-efficient fine-tuning (PEFT) offers a solution by adding small trainable components while freezing pre-trained parameters. However, existing methods primarily focus on uni-modal processing and overlook the modal fusion that is critical for multi-modal tasks. To fill this gap, we propose heterogeneous mixture-of-experts adapters that extend the traditional PEFT framework to support multi-modal expert combinations and improve information interaction. Additionally, our approach modifies the affine linear expert design to enable efficient modal fusion in a low-rank space, achieving competitive performance while fine-tuning only 5-8% of the parameters. Experiments across eight downstream tasks, including visual-audio and text-visual tasks, demonstrate the superior performance of the approach.
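The abstract describes two ingredients: a mixture of experts whose gate routes fused multi-modal features, and experts that are affine linear maps operating in a low-rank space. The paper's exact architecture is not given here, so the following is only a minimal NumPy sketch of that general idea; all shapes, function names (`low_rank_expert`, `moe_adapter`), and the softmax-gating choice are assumptions, not the authors' implementation.

```python
import numpy as np

def low_rank_expert(x, A, B, b):
    # Affine map through a low-rank bottleneck: (x @ A) @ B + b,
    # with A: (d, r) and B: (r, d), r << d, keeping the expert cheap.
    return x @ A @ B + b

def moe_adapter(x_text, x_vis, experts, W_gate):
    # Fuse modalities by concatenation, then softmax-gate over experts.
    h = np.concatenate([x_text, x_vis], axis=-1)          # (batch, d)
    logits = h @ W_gate                                   # (batch, E)
    gates = np.exp(logits - logits.max(-1, keepdims=True))
    gates /= gates.sum(-1, keepdims=True)                 # softmax weights
    # Each expert maps the fused feature; stack over the expert axis.
    outs = np.stack([low_rank_expert(h, *e) for e in experts], axis=-1)
    # Weighted combination of expert outputs: (batch, d).
    return (outs * gates[:, None, :]).sum(-1)

# Tiny usage example with hypothetical dimensions.
rng = np.random.default_rng(0)
d_text, d_vis, rank, n_experts = 4, 4, 2, 3
d = d_text + d_vis
experts = [
    (rng.standard_normal((d, rank)), rng.standard_normal((rank, d)), np.zeros(d))
    for _ in range(n_experts)
]
W_gate = rng.standard_normal((d, n_experts))
out = moe_adapter(rng.standard_normal((5, d_text)),
                  rng.standard_normal((5, d_vis)),
                  experts, W_gate)
```

Because each expert trains only `2*d*r + d` parameters instead of `d*d + d`, a handful of such adapters stays within a few percent of the frozen backbone's size, which is consistent with the 5-8% trainable-parameter budget the abstract reports.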
