Domain Generalisation of NMT: Fusing Adapters with Leave-One-Domain-Out Training

2022-05-01 · Findings of ACL 2022 · Code Available

Thuy-Trang Vu, Shahram Khadivi, Dinh Phung, Gholamreza Haffari


Abstract

Generalising to unseen domains remains an under-explored challenge in neural machine translation. Inspired by recent research on parameter-efficient transfer learning from pretrained models, this paper proposes a fusion-based generalisation method that learns to combine domain-specific parameters. To address the challenge that the test domain is unknown at training time, we propose a leave-one-domain-out training strategy that avoids information leakage. Empirical results on three language pairs show that our proposed fusion method outperforms other baselines by up to +0.8 BLEU on average.
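A minimal sketch of the idea described above, assuming an attention-style fusion over per-domain adapter outputs; this is not the authors' implementation, and the bilinear scoring, domain names, and dimensions are illustrative assumptions. The leave-one-domain-out step drops the adapter of the batch's own domain so the fusion must rely on the remaining domains, simulating an unseen test domain:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
# Hypothetical per-domain adapter outputs for one token position.
adapters = {d: rng.normal(size=dim) for d in ["law", "medical", "it"]}
hidden = rng.normal(size=dim)          # hidden state at this position
query_w = rng.normal(size=(dim, dim))  # learnable fusion attention matrix

def fuse(hidden, adapters, query_w, exclude=None):
    """Softmax-weighted combination of domain adapter outputs.

    exclude: the batch's own domain, left out during training so no
    information leaks from the domain being "held out".
    """
    domains = [d for d in adapters if d != exclude]
    scores = np.array([hidden @ query_w @ adapters[d] for d in domains])
    weights = np.exp(scores - scores.max())   # stable softmax
    weights /= weights.sum()
    fused = sum(w * adapters[d] for w, d in zip(weights, domains))
    return fused, dict(zip(domains, weights))

# Training batch from "law": fuse only the other domains' adapters.
fused, weights = fuse(hidden, adapters, query_w, exclude="law")
```

At test time on a genuinely unseen domain, `exclude=None` and all trained adapters participate in the fusion.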
