Translation Errors and Incomprehensibility: a Case Study using Machine-Translated Second Language Proficiency Tests
Takuya Matsuzaki, Akira Fujita, Naoya Todo, Noriko H. Arai
Abstract
This paper reports on an experiment in which 795 human participants answered questions taken from second language proficiency tests that had been translated into their native language. The output of three machine translation systems and two different human translations were used as the test material. We classified the translation errors in the questions according to an error taxonomy and analyzed the participants' responses on the basis of the type and frequency of the translation errors. Through this analysis, we identified the types of errors that most severely degraded the accuracy of the participants' answers, their confidence in those answers, and their overall evaluation of the translation quality.