
Findings of the WMT 2018 Shared Task on Automatic Post-Editing

2018-10-01 · WS 2018

Rajen Chatterjee, Matteo Negri, Raphael Rubino, Marco Turchi


Abstract

We present the results from the fourth round of the WMT shared task on MT Automatic Post-Editing. The task consists of automatically correcting the output of a "black-box" machine translation system by learning from human corrections. Keeping the same general evaluation setting as the three previous rounds, this year we focused on one language pair (English-German) and on domain-specific data (Information Technology), with MT outputs produced by two different paradigms: phrase-based (PBSMT) and neural (NMT). Five teams submitted 11 runs for the PBSMT subtask and 10 runs for the NMT subtask. In the former subtask, characterized by original translations of lower quality, top results achieved impressive improvements, up to -6.24 TER and +9.53 BLEU points over the baseline "do-nothing" system. The NMT subtask proved more challenging due to the higher quality of the original translations and the smaller amount of available training data. In this case, top results show smaller improvements, up to -0.38 TER and +0.8 BLEU points.
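The improvements above are reported as TER deltas, where TER is the number of word-level edits needed to turn the system output into the human reference, divided by the reference length (lower is better, so a negative delta means the post-edited output is closer to the reference than the baseline). As a rough, hedged illustration, the sketch below computes a simplified TER from plain word-level edit distance; the function names are invented for this example, and unlike real TER it omits the block-shift edit operation, so it will overestimate the score when phrases are merely reordered.

```python
def word_edit_distance(hyp, ref):
    """Classic Levenshtein distance over word tokens (insert/delete/substitute)."""
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i          # deleting all hypothesis words
    for j in range(n + 1):
        d[0][j] = j          # inserting all reference words
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[m][n]

def simplified_ter(hyp, ref):
    """Edits divided by reference length; real TER also allows block shifts."""
    return word_edit_distance(hyp, ref) / max(len(ref), 1)
```

For example, scoring both the raw MT baseline and an automatically post-edited output against the same human reference and subtracting the two scores gives a delta analogous to the -6.24 TER improvement reported for the PBSMT subtask.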
