SOTAVerified

Gradient-Informed Training for Low-Resource Multilingual Speech Translation

2026-03-26

Ruiyan Sun, Satoshi Nakamura


Abstract

In low-resource multilingual speech-to-text translation, uniform architectural sharing across languages frequently introduces representation conflicts that impede convergence. This work proposes a principled methodology to automatically determine layer-specific sharing patterns by mining training gradient information. Our approach employs three distinct analysis strategies: distance-based language clustering, self/cross-task divergence metrics for capacity allocation, and joint factorization coupled with canonical correlation analysis for subspace alignment. Extensive evaluation across four language pairs (using the SeamlessM4T-Medium architecture) demonstrates consistent improvements in translation quality metrics.
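The first strategy, distance-based language clustering over gradients, can be illustrated with a minimal sketch. The paper does not specify its exact distance measure or clustering algorithm; the snippet below assumes cosine distance between flattened per-language gradient vectors and a simple greedy threshold grouping, both of which are illustrative stand-ins rather than the authors' method.

```python
import numpy as np

def cosine_distance_matrix(grads: np.ndarray) -> np.ndarray:
    """Pairwise cosine distances between per-language gradient vectors.

    grads: array of shape (n_languages, n_params), one flattened
    gradient vector per language (an assumed representation).
    """
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    normed = grads / np.clip(norms, 1e-12, None)
    return 1.0 - normed @ normed.T

def greedy_cluster(dist: np.ndarray, threshold: float) -> list[list[int]]:
    """Group languages whose pairwise gradient distance stays under threshold.

    A hypothetical greedy scheme: languages in the same cluster would
    share a layer; languages in different clusters get separate capacity.
    """
    n = dist.shape[0]
    assigned = [False] * n
    clusters = []
    for i in range(n):
        if assigned[i]:
            continue
        group = [i]
        assigned[i] = True
        for j in range(i + 1, n):
            if not assigned[j] and all(dist[j, k] < threshold for k in group):
                group.append(j)
                assigned[j] = True
        clusters.append(group)
    return clusters

# Toy example: languages 0 and 1 have near-parallel gradients,
# language 2 points in an orthogonal direction.
grads = np.array([[1.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0],
                  [0.0, 0.0, 1.0]])
clusters = greedy_cluster(cosine_distance_matrix(grads), threshold=0.5)
print(clusters)  # → [[0, 1], [2]]
```

In a real training loop, the per-language gradients would be accumulated per layer, so the clustering (and hence the sharing pattern) can differ from layer to layer.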