SOTAVerified

Machine Translation

Machine translation is the task of translating a sentence from a source language into a different target language.

Approaches to machine translation range from rule-based to statistical to neural. More recently, attention-based encoder-decoder architectures such as the Transformer have attained major improvements in machine translation.
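The core of these attention-based architectures is scaled dot-product attention: each query attends over all keys, and the softmax-normalized scores weight the corresponding values. A minimal, pure-Python sketch (the `attention` and `softmax` names are illustrative, not from any library; a real NMT model operates on batched matrices with learned projections):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.

    Scores each key against the query with a dot product scaled by
    sqrt(d), then returns the softmax-weighted average of the values.
    Toy sketch with plain lists, not a full NMT implementation.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    dim_v = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(dim_v)]

# A query aligned with the first key pulls the output toward the first value.
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
```

Because the attention weights sum to one, the output is a convex combination of the value vectors, with the best-matching key receiving the largest weight.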

One of the most popular datasets used to benchmark machine translation systems is the WMT family of datasets. Some of the most commonly used evaluation metrics for machine translation systems include BLEU, METEOR, NIST, and others.

(Image credit: Google seq2seq)

Papers

Showing 6701–6750 of 10752 papers

Title | Status | Hype
Why Do Neural Dialog Systems Generate Short and Meaningless Replies? A Comparison between Dialog and Translation | | 0
Neural Machine Translation by Generating Multiple Linguistic Factors | | 0
FBK’s Multilingual Neural Machine Translation System for IWSLT 2017 | | 0
CHARCUT: Human-Targeted Character-Based MT Evaluation with Loose Differences | Code | 0
Evolution Strategy Based Automatic Tuning of Neural Machine Translation Systems | | 0
The RWTH Aachen Machine Translation Systems for IWSLT 2017 | | 0
Monolingual Embeddings for Low Resourced Neural Machine Translation | Code | 0
Domain-independent Punctuation and Segmentation Insertion | | 0
Toward Robust Neural Machine Translation for Noisy Input Sequences | | 0
Kyoto University MT System Description for IWSLT 2017 | | 0
Overview of the IWSLT 2017 Evaluation Campaign | | 0
KIT’s Multilingual Neural Machine Translation systems for IWSLT 2017 | | 0
Towards better translation performance on spoken language | | 0
Three-phase training to address data sparsity in Neural Machine Translation | | 0
Deliberation Networks: Sequence Generation Beyond One-Pass Decoding | | 0
Decoding with Value Networks for Neural Machine Translation | | 0
SVD-Softmax: Fast Softmax Approximation on Large Vocabulary Neural Networks | | 0
Bingo at IJCNLP-2017 Task 4: Augmenting Data using Machine Translation for Cross-linguistic Customer Feedback Classification | | 0
IJCNLP-2017 Task 4: Customer Feedback Analysis | | 0
Learning from Parenthetical Sentences for Term Translation in Machine Translation | | 0
Book Review: Syntax-Based Statistical Machine Translation by Philip Williams, Rico Sennrich, Matt Post and Philipp Koehn | | 0
Multiword Expression Processing: A Survey | Survey | 0
Deep Learning Scaling is Predictable, Empirically | | 0
Modeling Coherence for Neural Machine Translation with Dynamic and Topic Caches | | 0
Parameters Optimization of Deep Learning Models using Particle Swarm Optimization | | 0
Neural Text Generation: A Practical Guide | | 0
Population Based Training of Neural Networks | Code | 1
Modeling Past and Future for Neural Machine Translation | Code | 0
Learning to Remember Translation History with a Continuous Cache | Code | 0
Machine Translation using Semantic Web Technologies: A Survey | | 0
Counterfactual Learning for Machine Translation: Degeneracies and Solutions | | 0
Effective Strategies in Zero-Shot Neural Machine Translation | Code | 0
Using stochastic computation graphs formalism for optimization of sequence-to-sequence model | Code | 0
Evaluating Machine Translation Performance on Chinese Idioms with a Blacklist Method | | 0
E-PUR: An Energy-Efficient Processing Unit for Recurrent Neural Networks | | 0
Incorporating Syntactic Uncertainty in Neural Machine Translation with Forest-to-Sequence Model | | 0
An Encoder-Decoder Framework Translating Natural Language to Database Queries | | 0
ParaNMT-50M: Pushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations | | 0
Attention Focusing for Neural Machine Translation by Bridging Source and Target Embeddings | | 0
Classical Structured Prediction Losses for Sequence to Sequence Learning | | 0
Word, Subword or Character? An Empirical Study of Granularity in Chinese-English NMT | Code | 1
Syntax-Directed Attention for Neural Machine Translation | | 0
Document Context Neural Machine Translation with Memory Networks | | 0
Block-Sparse Recurrent Neural Networks | | 0
Non-Autoregressive Neural Machine Translation | Code | 0
Weighted Transformer Network for Machine Translation | Code | 0
A^4NT: Author Attribute Anonymity by Adversarial Training of Neural Machine Translation | | 0
Synthetic and Natural Noise Both Break Neural Machine Translation | Code | 0
Towards Neural Machine Translation with Partially Aligned Corpora | | 0
Compressing Word Embeddings via Deep Compositional Code Learning | Code | 0
Page 135 of 216

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Transformer Cycle (Rev) | BLEU score | 35.14 | | Unverified
2 | Noisy back-translation | BLEU score | 35 | | Unverified
3 | Transformer+Rep(Uni) | BLEU score | 33.89 | | Unverified
4 | T5-11B | BLEU score | 32.1 | | Unverified
5 | BiBERT | BLEU score | 31.26 | | Unverified
6 | Transformer + R-Drop | BLEU score | 30.91 | | Unverified
7 | Bi-SimCut | BLEU score | 30.78 | | Unverified
8 | BERT-fused NMT | BLEU score | 30.75 | | Unverified
9 | Data Diversification - Transformer | BLEU score | 30.7 | | Unverified
10 | SimCut | BLEU score | 30.56 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Transformer+BT (ADMIN init) | BLEU score | 46.4 | | Unverified
2 | Noisy back-translation | BLEU score | 45.6 | | Unverified
3 | mRASP+Fine-Tune | BLEU score | 44.3 | | Unverified
4 | Transformer + R-Drop | BLEU score | 43.95 | | Unverified
5 | Transformer (ADMIN init) | BLEU score | 43.8 | | Unverified
6 | Admin | BLEU score | 43.8 | | Unverified
7 | BERT-fused NMT | BLEU score | 43.78 | | Unverified
8 | MUSE (Parallel Multi-scale Attention) | BLEU score | 43.5 | | Unverified
9 | T5 | BLEU score | 43.4 | | Unverified
10 | Local Joint Self-attention | BLEU score | 43.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PiNMT | BLEU score | 40.43 | | Unverified
2 | BiBERT | BLEU score | 38.61 | | Unverified
3 | Bi-SimCut | BLEU score | 38.37 | | Unverified
4 | Cutoff + Relaxed Attention + LM | BLEU score | 37.96 | | Unverified
5 | DRDA | BLEU score | 37.95 | | Unverified
6 | Transformer + R-Drop + Cutoff | BLEU score | 37.9 | | Unverified
7 | SimCut | BLEU score | 37.81 | | Unverified
8 | Cutoff+Knee | BLEU score | 37.78 | | Unverified
9 | Cutoff | BLEU score | 37.6 | | Unverified
10 | CipherDAug | BLEU score | 37.53 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | HWTSC-Teacher-Sim | Score | 19.97 | | Unverified
2 | MS-COMET-22 | Score | 19.89 | | Unverified
3 | MS-COMET-QE-22 | Score | 19.76 | | Unverified
4 | KG-BERTScore | Score | 17.28 | | Unverified
5 | metricx_xl_DA_2019 | Score | 17.17 | | Unverified
6 | COMET-QE | Score | 16.8 | | Unverified
7 | COMET-22 | Score | 16.31 | | Unverified
8 | UniTE-src | Score | 15.68 | | Unverified
9 | UniTE-ref | Score | 15.38 | | Unverified
10 | metricx_xxl_DA_2019 | Score | 15.24 | | Unverified