
Simultaneous Interpretation Corpus Construction by Large Language Models in Distant Language Pair

2024-04-18

Yusuke Sakai, Mana Makinae, Hidetaka Kamigaito, Taro Watanabe


Abstract

In Simultaneous Machine Translation (SiMT) systems, training with a simultaneous interpretation (SI) corpus is an effective way to achieve high-quality, low-latency systems. However, curating such a corpus is very challenging given the limited availability of skilled annotators, and hence existing SI corpora are scarce. We therefore propose a method that uses Large Language Models to convert existing speech translation corpora into interpretation-style data that maintains the original word order and preserves the entire source content (LLM-SI-Corpus). We demonstrate that fine-tuning SiMT models in text-to-text and speech-to-text settings with the LLM-SI-Corpus reduces latency while maintaining the same level of quality as models trained on offline datasets. The LLM-SI-Corpus is available at https://github.com/yusuke1997/LLM-SI-Corpus.
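The conversion step the abstract describes can be sketched as an LLM prompting pipeline. This is a minimal illustration under assumptions, not the paper's actual method: the function names, the prompt wording, and the `llm` callable are all hypothetical placeholders for whatever chat-completion API is used.

```python
# Hypothetical sketch: rewrite an offline translation into
# simultaneous-interpretation (SI) style with an LLM.
# Prompt wording and names are illustrative assumptions only;
# the paper's actual prompts may differ.

def build_si_prompt(source: str, offline_translation: str) -> str:
    """Build an instruction asking an LLM to rewrite the offline
    translation so it follows the source word order (monotonic)
    while preserving all source content."""
    return (
        "Rewrite the translation in simultaneous-interpretation style.\n"
        "Constraints:\n"
        "1. Follow the word order of the source as closely as possible.\n"
        "2. Preserve all information in the source; omit nothing.\n"
        f"Source: {source}\n"
        f"Offline translation: {offline_translation}\n"
        "SI-style translation:"
    )


def convert_to_si_style(source: str, offline_translation: str, llm) -> str:
    """`llm` is any callable mapping a prompt string to a completion,
    e.g. a thin wrapper around a chat-completion API."""
    return llm(build_si_prompt(source, offline_translation))
```

Applied over every sentence pair of an existing speech translation corpus, this would yield interpretation-style references suitable for fine-tuning a SiMT model.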
