RTM Ensemble Learning Results at Quality Estimation Task
2020-11-01 · WMT (EMNLP) 2020
Ergun Biçici
Abstract
We obtain new results using referential translation machines (RTMs), mixing and stacking predictions to obtain a better mixture-of-experts prediction. We achieve better results than the baseline model in the Task 1 subtasks. Our stacking results significantly improve performance on the training sets but decrease it on the test sets. RTMs rank 5th among 13 models in the ru-en subtask and 5th in the multilingual track of sentence-level Task 1 based on MAE.
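As a rough illustration of the stacking idea described above, the following is a minimal sketch of combining several base regressors' predictions with a meta-learner, using scikit-learn's generic `StackingRegressor` and synthetic data. The features, base models, and data here are assumptions for illustration only, not the RTM features or the exact ensemble used in the paper; evaluation uses MAE, the metric reported in the abstract.

```python
# Hedged sketch: a stacked mixture-of-experts regressor.
# All models and data below are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

# Synthetic regression data standing in for quality-estimation features/scores.
X, y = make_regression(n_samples=300, n_features=10, noise=0.5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base "experts"; their cross-validated predictions feed the meta-learner.
experts = [
    ("ridge", Ridge()),
    ("svr", SVR()),
    ("tree", DecisionTreeRegressor(random_state=0)),
]
stack = StackingRegressor(estimators=experts, final_estimator=Ridge())
stack.fit(X_tr, y_tr)

# Evaluate with MAE, the metric used for ranking in the abstract.
mae = mean_absolute_error(y_te, stack.predict(X_te))
print(f"test MAE: {mae:.3f}")
```

Note that, consistent with the abstract's observation, a stacked ensemble can overfit: strong training-set gains do not always carry over to the test set, which is why the meta-learner is trained on out-of-fold base predictions.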