SOTAVerified

Machine Translation

Machine translation is the task of translating a sentence from a source language into a different target language.

Approaches to machine translation range from rule-based to statistical to neural. More recently, attention-based encoder-decoder architectures such as the Transformer have attained major improvements in machine translation; pretrained encoders such as BERT have also been incorporated into translation systems.
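The core of these attention-based architectures is scaled dot-product attention: each decoder query scores all encoder keys and takes a weighted sum of their values. A minimal pure-Python sketch on toy vectors (the function names and inputs here are illustrative, not from any particular system):

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: for each query, score every key,
    normalize the scores with softmax, and return the weighted sum of
    the corresponding value vectors."""
    d = len(keys[0])  # key dimensionality, used for the 1/sqrt(d) scaling
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# A query aligned with the first key attends almost entirely to the
# first value vector.
out = attention(queries=[[10.0, 0.0]],
                keys=[[10.0, 0.0], [0.0, 10.0]],
                values=[[1.0, 0.0], [0.0, 1.0]])
```

In a Transformer, the queries come from the decoder state and the keys/values from the encoder output (cross-attention); real implementations operate on learned projections of batched tensors rather than raw lists.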

One of the most popular datasets used to benchmark machine translation systems is the WMT family of datasets. Some of the most commonly used evaluation metrics for machine translation systems include BLEU, METEOR, NIST, and others.
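BLEU, the most common of these metrics, is the geometric mean of modified n-gram precisions between the candidate and reference, multiplied by a brevity penalty that punishes overly short candidates. A minimal sentence-level sketch (real evaluations use corpus-level BLEU with standardized tokenization and smoothing, e.g. via sacreBLEU):

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of modified n-gram
    precisions for n = 1..max_n, times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = ngram_counts(candidate, n)
        ref = ngram_counts(reference, n)
        # "Modified" precision: clip each candidate n-gram count by
        # its count in the reference.
        overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # unsmoothed BLEU is zero if any precision is zero
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty: exp(1 - r/c) when the candidate is shorter.
    bp = (1.0 if len(candidate) >= len(reference)
          else math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(log_avg)

cand = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
# With max_n=4 this pair scores 0 (no 4-gram matches), which is why
# corpus-level BLEU or smoothing is used in practice.
score = bleu(cand, ref, max_n=2)  # ≈ 0.7071
```

The BLEU scores in the leaderboards below are corpus-level and reported on a 0-100 scale rather than 0-1.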

(Image credit: Google seq2seq)

Papers

Showing 6001–6025 of 10752 papers

Title | Status | Hype
The ADAPT System Description for the STAPLE 2020 English-to-Portuguese Translation Task | | 0
The ADAPT System Description for the WMT20 News Translation Task | | 0
The AFRL IWSLT 2018 Systems: What Worked, What Didn’t | | 0
The AFRL IWSLT 2020 Systems: Work-From-Home Edition | | 0
The AFRL-MITLL WMT15 System: There's More than One Way to Decode It! | | 0
The AFRL-MITLL WMT16 News-Translation Task Systems | | 0
The AFRL-MITLL WMT17 Systems: Old, New, Borrowed, BLEU | | 0
The AFRL-Ohio State WMT18 Multimodal System: Combining Visual with Traditional | | 0
The AFRL-OSU WMT17 Multimodal Translation System: An Image Processing Approach | | 0
The AFRL WMT17 Neural Machine Translation Training Task Submission | | 0
The AFRL WMT18 Systems: Ensembling, Continuation and Combination | | 0
The AFRL WMT19 Systems: Old Favorites and New Tricks | | 0
The AFRL WMT20 News Translation Systems | | 0
THEaiTRE: Artificial Intelligence to Write a Theatre Play | | 0
The AMARA Corpus: Building Parallel Language Resources for the Educational Domain | | 0
The AMU System in the CoNLL-2014 Shared Task: Grammatical Error Correction by Data-Intensive and Feature-Rich Statistical Machine Translation | | 0
The AMU-UEdin Submission to the WMT 2017 Shared Task on Automatic Post-Editing | | 0
The aNALoGuE Challenge: Non Aligned Language GEneration | | 0
The Anatomy of a Modular System for Media Content Analysis | | 0
The annotation of the Central Unit in Rhetorical Structure Trees: A Key Step in Annotating Rhetorical Relations | | 0
The Arabic Parallel Gender Corpus 2.0: Extensions and Analyses | | 0
The ARIEL-CMU Systems for LoReHLT18 | | 0
The Austrian Language Resource Portal for the Use and Provision of Language Resources in a Language Variety by Public Administration -- a Showcase for Collaboration between Public Administration and a University | | 0
The Benefit of Pseudo-Reference Translations in Quality Estimation of MT Output | | 0
The Best Templates Match Technique For Example Based Machine Translation | | 0
Page 241 of 431

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Transformer Cycle (Rev) | BLEU score | 35.14 | | Unverified
2 | Noisy back-translation | BLEU score | 35 | | Unverified
3 | Transformer+Rep(Uni) | BLEU score | 33.89 | | Unverified
4 | T5-11B | BLEU score | 32.1 | | Unverified
5 | BiBERT | BLEU score | 31.26 | | Unverified
6 | Transformer + R-Drop | BLEU score | 30.91 | | Unverified
7 | Bi-SimCut | BLEU score | 30.78 | | Unverified
8 | BERT-fused NMT | BLEU score | 30.75 | | Unverified
9 | Data Diversification - Transformer | BLEU score | 30.7 | | Unverified
10 | SimCut | BLEU score | 30.56 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Transformer+BT (ADMIN init) | BLEU score | 46.4 | | Unverified
2 | Noisy back-translation | BLEU score | 45.6 | | Unverified
3 | mRASP+Fine-Tune | BLEU score | 44.3 | | Unverified
4 | Transformer + R-Drop | BLEU score | 43.95 | | Unverified
5 | Admin | BLEU score | 43.8 | | Unverified
6 | Transformer (ADMIN init) | BLEU score | 43.8 | | Unverified
7 | BERT-fused NMT | BLEU score | 43.78 | | Unverified
8 | MUSE (Parallel Multi-scale Attention) | BLEU score | 43.5 | | Unverified
9 | T5 | BLEU score | 43.4 | | Unverified
10 | Local Joint Self-attention | BLEU score | 43.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PiNMT | BLEU score | 40.43 | | Unverified
2 | BiBERT | BLEU score | 38.61 | | Unverified
3 | Bi-SimCut | BLEU score | 38.37 | | Unverified
4 | Cutoff + Relaxed Attention + LM | BLEU score | 37.96 | | Unverified
5 | DRDA | BLEU score | 37.95 | | Unverified
6 | Transformer + R-Drop + Cutoff | BLEU score | 37.9 | | Unverified
7 | SimCut | BLEU score | 37.81 | | Unverified
8 | Cutoff+Knee | BLEU score | 37.78 | | Unverified
9 | Cutoff | BLEU score | 37.6 | | Unverified
10 | CipherDAug | BLEU score | 37.53 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | HWTSC-Teacher-Sim | Score | 19.97 | | Unverified
2 | MS-COMET-22 | Score | 19.89 | | Unverified
3 | MS-COMET-QE-22 | Score | 19.76 | | Unverified
4 | KG-BERTScore | Score | 17.28 | | Unverified
5 | metricx_xl_DA_2019 | Score | 17.17 | | Unverified
6 | COMET-QE | Score | 16.8 | | Unverified
7 | COMET-22 | Score | 16.31 | | Unverified
8 | UniTE-src | Score | 15.68 | | Unverified
9 | UniTE-ref | Score | 15.38 | | Unverified
10 | metricx_xxl_DA_2019 | Score | 15.24 | | Unverified