Importance-based Neuron Allocation for Multilingual Neural Machine Translation

2021-07-14 · ACL 2021 · Code Available

Wanying Xie, Yang Feng, Shuhao Gu, Dong Yu


Abstract

Multilingual neural machine translation with a single model has drawn much attention due to its capability to handle multiple languages. However, the current multilingual translation paradigm tends to make the model preserve general knowledge while ignoring language-specific knowledge. Previous works attempt to solve this problem by adding various kinds of language-specific modules to the model, but they suffer from parameter explosion and require specialized manual design. To address these issues, we propose to divide the model's neurons into general and language-specific parts based on their importance across languages. The general part preserves general knowledge and participates in the translation of all languages, while the language-specific part preserves language-specific knowledge and participates in the translation of only certain languages. Experimental results on several language pairs from the IWSLT and Europarl corpora demonstrate the effectiveness and universality of the proposed method.
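The allocation idea described in the abstract can be sketched as follows. Note this is a minimal illustration, not the paper's implementation: the importance proxy (minimum importance across languages), the `general_ratio` parameter, and the assignment of residual neurons to their highest-scoring language are all assumptions made for the sake of the example.

```python
import numpy as np

def allocate_neurons(importance, general_ratio=0.5):
    """Partition neurons into a general set and per-language specific sets.

    importance: (num_languages, num_neurons) array of per-language
    importance scores (a hypothetical proxy; the paper's exact
    importance criterion is not given in the abstract).
    """
    num_langs, num_neurons = importance.shape
    # Neurons that matter for every language become "general":
    # rank by the minimum importance across languages.
    shared_score = importance.min(axis=0)
    order = np.argsort(-shared_score)
    n_general = int(general_ratio * num_neurons)
    general = set(order[:n_general].tolist())
    # Remaining neurons are assigned to the language that
    # finds them most important.
    specific = {lang: set() for lang in range(num_langs)}
    for n in order[n_general:]:
        specific[int(importance[:, n].argmax())].add(int(n))
    return general, specific

# Toy example with random importance scores for 3 languages, 8 neurons.
rng = np.random.default_rng(0)
imp = rng.random((3, 8))
general, specific = allocate_neurons(imp, general_ratio=0.5)
```

In this sketch, the general set would be used in every language's forward pass, while each language-specific set would be activated only for its own language pair.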
