SOTAVerified

Machine Translation

Machine translation is the task of translating text from a source language into a different target language.

Approaches to machine translation range from rule-based to statistical to neural. More recently, attention-based encoder-decoder architectures such as the Transformer have attained major improvements in machine translation.
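The core of these attention-based architectures is scaled dot-product attention: each query vector produces a softmax-weighted average of value vectors, with weights given by query–key similarity. A minimal, dependency-free sketch (illustrative only; real implementations operate on batched tensors with learned projections):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors.

    For each query: score every key by dot product scaled by sqrt(d),
    softmax the scores, and return the weighted sum of the values.
    """
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out
```

In a Transformer encoder-decoder, the decoder uses this same operation both over its own previous outputs (self-attention) and over the encoder's representations of the source sentence (cross-attention).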

One of the most popular benchmark suites for machine translation is the WMT family of datasets. Commonly used evaluation metrics for machine translation systems include BLEU, METEOR, and NIST.
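BLEU, the metric used throughout the leaderboards below, combines clipped n-gram precisions (up to 4-grams) with a brevity penalty. A simplified sentence-level sketch (function names are illustrative; production evaluation uses toolkits such as sacreBLEU, which add smoothing and standardized tokenization):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(hypothesis, reference, max_n=4):
    """Minimal unsmoothed sentence-level BLEU against one reference."""
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each n-gram's count by its count in the reference.
        clipped = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        if clipped == 0:
            return 0.0  # unsmoothed BLEU is zero if any precision is zero
        log_precisions.append(math.log(clipped / total))
    # Brevity penalty: punish hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

Note that the BLEU scores reported below are corpus-level: the clipped counts and lengths are summed over the whole test set before the precisions and brevity penalty are computed, not averaged per sentence.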

(Image credit: Google seq2seq)

Papers

Showing 9151–9175 of 10752 papers

Title | Hype
Comparing and combining tagging with different decoding algorithms for back-translation in NMT: learnings from a low resource scenario | 0
Comparing BERT-based Reward Functions for Deep Reinforcement Learning in Machine Translation | 0
Comparing Classifier use in Chinese and Japanese | 0
Comparing CRF and template-matching in phrasing tasks within a Hybrid MT system | 0
Comparing Different Criteria for Vietnamese Word Segmentation | 0
Comparing Formulaic Language in Human and Machine Translation: Insight from a Parliamentary Corpus | 0
Comparing human perceptions of post-editing effort with post-editing operations | 0
Comparing Machine Translation and Human Translation: A Case Study | 0
Comparing MT Approaches for Text Normalization | 0
Comparing Multilingual Comparable Articles Based On Opinions | 0
Comparing Multilingual NMT Models and Pivoting | 0
Comparing Pipelined and Integrated Approaches to Dialectal Arabic Neural Machine Translation | 0
Comparing Recurrent and Convolutional Architectures for English-Hindi Neural Machine Translation | 0
Comparing Representations of Semantic Roles for String-To-Tree Decoding | 0
Comparing Rule-based and SMT-based Spelling Normalisation for English Historical Texts | 0
Comparing the Quality of Focused Crawlers and of the Translation Resources Obtained from them | 0
Comparing Translator Acceptability of TM and SMT Outputs | 0
Comparing two acquisition systems for automatically building an English–Croatian parallel corpus from multilingual websites | 0
Comparing Unsupervised Word Translation Methods Step by Step | 0
Comparison and Adaptation of Automatic Evaluation Metrics for Quality Assessment of Re-Speaking | 0
Comparison between NMT and PBSMT Performance for Translating Noisy User-Generated Content | 0
Comparison of Coreference Resolvers for Deep Syntax Translation | 0
Comparison of Deep Learning and the Classical Machine Learning Algorithm for the Malware Detection | 0
Comparison of Grapheme-to-Phoneme Conversion Methods on a Myanmar Pronunciation Dictionary | 0
Comparison of scheduling methods for the learning rate of neural network language models (Modèles de langue neuronaux : une comparaison de plusieurs stratégies d'apprentissage) [in French] | 0
Page 367 of 431

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Transformer Cycle (Rev) | BLEU score | 35.14 | – | Unverified
2 | Noisy back-translation | BLEU score | 35 | – | Unverified
3 | Transformer+Rep(Uni) | BLEU score | 33.89 | – | Unverified
4 | T5-11B | BLEU score | 32.1 | – | Unverified
5 | BiBERT | BLEU score | 31.26 | – | Unverified
6 | Transformer + R-Drop | BLEU score | 30.91 | – | Unverified
7 | Bi-SimCut | BLEU score | 30.78 | – | Unverified
8 | BERT-fused NMT | BLEU score | 30.75 | – | Unverified
9 | Data Diversification - Transformer | BLEU score | 30.7 | – | Unverified
10 | SimCut | BLEU score | 30.56 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Transformer+BT (ADMIN init) | BLEU score | 46.4 | – | Unverified
2 | Noisy back-translation | BLEU score | 45.6 | – | Unverified
3 | mRASP+Fine-Tune | BLEU score | 44.3 | – | Unverified
4 | Transformer + R-Drop | BLEU score | 43.95 | – | Unverified
5 | Admin | BLEU score | 43.8 | – | Unverified
6 | Transformer (ADMIN init) | BLEU score | 43.8 | – | Unverified
7 | BERT-fused NMT | BLEU score | 43.78 | – | Unverified
8 | MUSE (Parallel Multi-scale Attention) | BLEU score | 43.5 | – | Unverified
9 | T5 | BLEU score | 43.4 | – | Unverified
10 | Local Joint Self-attention | BLEU score | 43.3 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PiNMT | BLEU score | 40.43 | – | Unverified
2 | BiBERT | BLEU score | 38.61 | – | Unverified
3 | Bi-SimCut | BLEU score | 38.37 | – | Unverified
4 | Cutoff + Relaxed Attention + LM | BLEU score | 37.96 | – | Unverified
5 | DRDA | BLEU score | 37.95 | – | Unverified
6 | Transformer + R-Drop + Cutoff | BLEU score | 37.9 | – | Unverified
7 | SimCut | BLEU score | 37.81 | – | Unverified
8 | Cutoff+Knee | BLEU score | 37.78 | – | Unverified
9 | Cutoff | BLEU score | 37.6 | – | Unverified
10 | CipherDAug | BLEU score | 37.53 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | HWTSC-Teacher-Sim | Score | 19.97 | – | Unverified
2 | MS-COMET-22 | Score | 19.89 | – | Unverified
3 | MS-COMET-QE-22 | Score | 19.76 | – | Unverified
4 | KG-BERTScore | Score | 17.28 | – | Unverified
5 | metricx_xl_DA_2019 | Score | 17.17 | – | Unverified
6 | COMET-QE | Score | 16.8 | – | Unverified
7 | COMET-22 | Score | 16.31 | – | Unverified
8 | UniTE-src | Score | 15.68 | – | Unverified
9 | UniTE-ref | Score | 15.38 | – | Unverified
10 | metricx_xxl_DA_2019 | Score | 15.24 | – | Unverified