SOTAVerified

Machine Translation

Machine translation is the task of automatically translating text from a source language into a different target language.

Approaches to machine translation range from rule-based and statistical methods to neural networks. More recently, attention-based encoder-decoder architectures such as the Transformer have driven major improvements in translation quality.
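The attention mechanism at the core of these architectures can be illustrated with a pure-Python sketch of scaled dot-product attention for a single query. This is a deliberate simplification: the Transformer applies it in batches, across multiple heads, and with learned projection matrices.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector.

    Each score is query . key / sqrt(d); the softmax of the scores
    weights the value vectors, which are summed into the output.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Weighted sum of value vectors, dimension by dimension.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]
```

A query most similar to the first key pulls the output toward the first value vector, which is the intuition behind "attending" to the most relevant source tokens.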

One of the most popular benchmark suites for machine translation is the WMT family of datasets. Commonly used evaluation metrics include BLEU, METEOR, and NIST.
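BLEU, the metric used in the benchmark tables below, measures n-gram overlap between a candidate translation and a reference, combined with a brevity penalty. The following is a minimal single-reference, sentence-level sketch; real evaluations use corpus-level BLEU with smoothing and standardized tokenization (e.g. via sacreBLEU).

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU with a single reference.

    Computes clipped n-gram precision for n = 1..max_n, takes the
    geometric mean, and multiplies by a brevity penalty. No smoothing,
    so any zero precision (common for short sentences) yields 0.0.
    """
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        # Clipped counts: a candidate n-gram is credited at most as
        # many times as it appears in the reference.
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty punishes candidates shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(log_avg)
```

An exact match scores 1.0; a candidate with no overlapping n-grams scores 0.0. Scores in the tables below are conventionally reported scaled by 100.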

(Image credit: Google seq2seq)

Papers

Showing 5401–5450 of 10752 papers

Title | Status | Hype
A Multi-Task Architecture on Relevance-based Neural Query Translation | — | 0
Generalizing Back-Translation in Neural Machine Translation | — | 0
Automatic Conditional Generation of Personalized Social Media Short Texts | — | 0
Tagged Back-Translation | — | 0
A Simple and Effective Approach to Automatic Post-Editing with Transfer Learning | Code | 1
Character n-gram Embeddings to Improve RNN Language Models | — | 0
Lattice Transformer for Speech Translation | — | 0
Neural Arabic Question Answering | Code | 0
Does BLEU Score Work for Code Migration? | — | 0
Monotonic Infinite Lookback Attention for Simultaneous Machine Translation | — | 0
Keeping Notes: Conditional Natural Language Generation with a Scratchpad Mechanism | Code | 0
A Focus on Neural Machine Translation for African Languages | Code | 0
Translating Translationese: A Two-Step Approach to Unsupervised Machine Translation | — | 0
Improving Neural Language Modeling via Adversarial Training | Code | 0
Generalized Data Augmentation for Low-Resource Translation | — | 0
There is no Artificial General Intelligence | — | 0
Happy Together: Learning and Understanding Appraisal From Natural Language | — | 0
Making Asynchronous Stochastic Gradient Descent Work for Transformers | — | 0
Word-based Domain Adaptation for Neural Machine Translation | — | 0
Shared-Private Bilingual Word Embeddings for Neural Machine Translation | — | 0
From Caesar Cipher to Unsupervised Learning: A New Method for Classifier Parameter Estimation | — | 0
Syntactically Supervised Transformers for Faster Neural Machine Translation | Code | 0
Bridging the Gap between Training and Inference for Neural Machine Translation | — | 0
Unsupervised Pivot Translation for Distant Languages | — | 0
Robust Neural Machine Translation with Doubly Adversarial Inputs | — | 0
Efficient, Lexicon-Free OCR using Deep Learning | — | 0
Towards conceptual generalization in the embedding space | Code | 0
Learning Deep Transformer Models for Machine Translation | Code | 0
Imitation Learning for Non-Autoregressive Neural Machine Translation | — | 0
Learning Bilingual Sentence Embeddings via Autoencoding and Computing Similarities with a Multilayer Perceptron | — | 0
Exploiting Sentential Context for Neural Machine Translation | — | 0
Post-editing Productivity with Neural Machine Translation: An Empirical Assessment of Speed and Quality in the Banking and Finance Domain | — | 0
Lattice-Based Transformer Encoder for Neural Machine Translation | — | 0
Improved Zero-shot Neural Machine Translation via Ignoring Spurious Correlations | — | 0
KERMIT: Generative Insertion-Based Modeling for Sequences | — | 0
Transforming Complex Sentences into a Semantic Hierarchy | Code | 1
Dynamically Composing Domain-Data Selection with Clean-Data Selection by "Co-Curricular Learning" for Neural Machine Translation | — | 0
Resolving Gendered Ambiguous Pronouns with BERT | Code | 0
Training Neural Machine Translation To Apply Terminology Constraints | Code | 0
Assessing the Ability of Self-Attention Networks to Learn Word Order | Code | 0
Evaluating Gender Bias in Machine Translation | Code | 1
Fluent Translations from Disfluent Speech in End-to-End Speech Translation | — | 0
From Words to Sentences: A Progressive Learning Approach for Zero-resource Machine Translation with Visual Pivots | — | 0
Masked Non-Autoregressive Image Captioning | — | 0
Domain Adaptive Inference for Neural Machine Translation | — | 0
Domain Adaptation of Neural Machine Translation by Lexicon Induction | Code | 1
Incorporating Source-Side Phrase Structures into Neural Machine Translation | — | 0
Neural Text Simplification in Low-Resource Conditions Using Weak Supervision | — | 0
Neural Machine Translation between Myanmar (Burmese) and Rakhine (Arakanese) | — | 0
Grounded Word Sense Translation | — | 0
Page 109 of 216

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Transformer Cycle (Rev) | BLEU score | 35.14 | — | Unverified
2 | Noisy back-translation | BLEU score | 35 | — | Unverified
3 | Transformer+Rep(Uni) | BLEU score | 33.89 | — | Unverified
4 | T5-11B | BLEU score | 32.1 | — | Unverified
5 | BiBERT | BLEU score | 31.26 | — | Unverified
6 | Transformer + R-Drop | BLEU score | 30.91 | — | Unverified
7 | Bi-SimCut | BLEU score | 30.78 | — | Unverified
8 | BERT-fused NMT | BLEU score | 30.75 | — | Unverified
9 | Data Diversification - Transformer | BLEU score | 30.7 | — | Unverified
10 | SimCut | BLEU score | 30.56 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Transformer+BT (ADMIN init) | BLEU score | 46.4 | — | Unverified
2 | Noisy back-translation | BLEU score | 45.6 | — | Unverified
3 | mRASP+Fine-Tune | BLEU score | 44.3 | — | Unverified
4 | Transformer + R-Drop | BLEU score | 43.95 | — | Unverified
5 | Admin | BLEU score | 43.8 | — | Unverified
6 | Transformer (ADMIN init) | BLEU score | 43.8 | — | Unverified
7 | BERT-fused NMT | BLEU score | 43.78 | — | Unverified
8 | MUSE (Parallel Multi-scale Attention) | BLEU score | 43.5 | — | Unverified
9 | T5 | BLEU score | 43.4 | — | Unverified
10 | Local Joint Self-attention | BLEU score | 43.3 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PiNMT | BLEU score | 40.43 | — | Unverified
2 | BiBERT | BLEU score | 38.61 | — | Unverified
3 | Bi-SimCut | BLEU score | 38.37 | — | Unverified
4 | Cutoff + Relaxed Attention + LM | BLEU score | 37.96 | — | Unverified
5 | DRDA | BLEU score | 37.95 | — | Unverified
6 | Transformer + R-Drop + Cutoff | BLEU score | 37.9 | — | Unverified
7 | SimCut | BLEU score | 37.81 | — | Unverified
8 | Cutoff+Knee | BLEU score | 37.78 | — | Unverified
9 | Cutoff | BLEU score | 37.6 | — | Unverified
10 | CipherDAug | BLEU score | 37.53 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | HWTSC-Teacher-Sim | Score | 19.97 | — | Unverified
2 | MS-COMET-22 | Score | 19.89 | — | Unverified
3 | MS-COMET-QE-22 | Score | 19.76 | — | Unverified
4 | KG-BERTScore | Score | 17.28 | — | Unverified
5 | metricx_xl_DA_2019 | Score | 17.17 | — | Unverified
6 | COMET-QE | Score | 16.8 | — | Unverified
7 | COMET-22 | Score | 16.31 | — | Unverified
8 | UniTE-src | Score | 15.68 | — | Unverified
9 | UniTE-ref | Score | 15.38 | — | Unverified
10 | metricx_xxl_DA_2019 | Score | 15.24 | — | Unverified