SOTAVerified

Machine Translation

Machine translation is the task of automatically translating text from a source language into a target language.

Approaches to machine translation range from rule-based to statistical to neural. More recently, attention-based encoder-decoder architectures such as the Transformer have driven major improvements in translation quality.
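The core operation behind these attention-based architectures is scaled dot-product attention: each query attends over all keys and returns a softmax-weighted mix of the value vectors. A minimal pure-Python sketch (unbatched, single head, no learned projections; all names are illustrative):

```python
import math

def softmax(xs):
    # subtract the max for numerical stability
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention on plain lists of vectors.

    For each query: score it against every key (dot product scaled
    by sqrt(d)), softmax the scores, and return the weighted sum of
    the value vectors.
    """
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

When a query closely matches one key, nearly all attention weight goes to that key's value, which is what lets the decoder "look at" the most relevant source positions.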

One of the most popular benchmark suites for machine translation is the WMT family of datasets. Commonly used evaluation metrics include BLEU, METEOR, and NIST.
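BLEU, the metric used in the benchmark tables below, is a geometric mean of clipped n-gram precisions multiplied by a brevity penalty. A simplified single-reference, sentence-level sketch (with add-one smoothing, which standard corpus-level BLEU does not use):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # multiset of all n-grams in the token sequence
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU against a single reference.

    Geometric mean of clipped n-gram precisions (n = 1..max_n),
    times a brevity penalty that punishes candidates shorter than
    the reference.
    """
    precisions = []
    for n in range(1, max_n + 1):
        cand = ngrams(candidate, n)
        ref = ngrams(reference, n)
        # clip each candidate n-gram count by its count in the reference
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        # add-one smoothing so one empty n-gram order doesn't zero the score
        precisions.append((overlap + 1) / (total + 1))
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

Real evaluations should use a standard implementation such as sacreBLEU, which fixes tokenization and smoothing so scores are comparable across papers.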

(Image credit: Google seq2seq)

Papers

Showing 701-750 of 10,752 papers

Title | Status | Hype
End-to-End Slot Alignment and Recognition for Cross-Lingual NLU | Code | 1
Adversarial Subword Regularization for Robust Neural Machine Translation | Code | 1
Towards Reasonably-Sized Character-Level Transformer NMT by Finetuning Subword Systems | Code | 1
Conversational Word Embedding for Retrieval-Based Dialog System | Code | 1
Lexically Constrained Neural Machine Translation with Levenshtein Transformer | Code | 1
All Word Embeddings from One Embedding | Code | 1
Lite Transformer with Long-Short Range Attention | Code | 1
Improving Massively Multilingual Neural Machine Translation and Zero-Shot Translation | Code | 1
Residual Energy-Based Models for Text Generation | Code | 1
Attention is Not Only a Weight: Analyzing Transformers with Vector Norms | Code | 1
SimAlign: High Quality Word Alignments without Parallel Training Data using Static and Contextualized Embeddings | Code | 1
Understanding the Difficulty of Training Transformers | Code | 1
Non-Autoregressive Machine Translation with Latent Alignments | Code | 1
Transformer based Grapheme-to-Phoneme Conversion | Code | 1
Balancing Training for Multilingual Neural Machine Translation | Code | 1
BLEU might be Guilty but References are not Innocent | Code | 1
Neural Machine Translation: Challenges, Progress and Future | Code | 1
Reducing Gender Bias in Neural Machine Translation as a Domain Adaptation Problem | Code | 1
Graph-to-Tree Neural Networks for Learning Structured Input-Output Translation with Applications to Semantic Parsing and Math Word Problem | Code | 1
Unsupervised Domain Clusters in Pretrained Language Models | Code | 1
Aligned Cross Entropy for Non-Autoregressive Machine Translation | Code | 1
Editable Neural Networks | Code | 1
Low Resource Neural Machine Translation: A Benchmark for Five African Languages | Code | 1
Variational Transformers for Diverse Response Generation | Code | 1
FFR V1.0: Fon-French Neural Machine Translation | Code | 1
Felix: Flexible Text Editing Through Tagging and Insertion | Code | 1
PowerNorm: Rethinking Batch Normalization in Transformers | Code | 1
Masakhane -- Machine Translation For Africa | Code | 1
Learning to Encode Position for Transformer with Continuous Dynamical Model | Code | 1
Investigating Entity Knowledge in BERT with Simple Neural End-To-End Entity Linking | Code | 1
Wide-minima Density Hypothesis and the Explore-Exploit Learning Rate Schedule | Code | 1
Morfessor EM+Prune: Improved Subword Segmentation with Expectation Maximization and Pruning | Code | 1
Towards Automatic Face-to-Face Translation | Code | 1
Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers | Code | 1
Addressing Some Limitations of Transformers with Feedback Memory | Code | 1
Towards Making the Most of Context in Neural Machine Translation | Code | 1
A Survey of Deep Learning Techniques for Neural Machine Translation | Code | 1
Incorporating BERT into Neural Machine Translation | Code | 1
Neural Machine Translation with Joint Representation | Code | 1
A Probabilistic Formulation of Unsupervised Text Style Transfer | Code | 1
Time-aware Large Kernel Convolutions | Code | 1
Neural Machine Translation System of Indic Languages -- An Attention based Approach | Code | 1
AMR Similarity Metrics from Principles | Code | 1
PMIndia -- A Collection of Parallel Corpora of Languages of India | Code | 1
Multilingual Denoising Pre-training for Neural Machine Translation | Code | 1
A Simple Baseline to Semi-Supervised Domain Adaptation for Machine Translation | Code | 1
Non-Autoregressive Machine Translation with Disentangled Context Transformer | Code | 1
Improving Transformer Optimization Through Better Initialization | Code | 1
Non-autoregressive Translation with Disentangled Context Transformer | Code | 1
Page 15 of 216

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Transformer Cycle (Rev) | BLEU score | 35.14 | | Unverified
2 | Noisy back-translation | BLEU score | 35 | | Unverified
3 | Transformer+Rep(Uni) | BLEU score | 33.89 | | Unverified
4 | T5-11B | BLEU score | 32.1 | | Unverified
5 | BiBERT | BLEU score | 31.26 | | Unverified
6 | Transformer + R-Drop | BLEU score | 30.91 | | Unverified
7 | Bi-SimCut | BLEU score | 30.78 | | Unverified
8 | BERT-fused NMT | BLEU score | 30.75 | | Unverified
9 | Data Diversification - Transformer | BLEU score | 30.7 | | Unverified
10 | SimCut | BLEU score | 30.56 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Transformer+BT (ADMIN init) | BLEU score | 46.4 | | Unverified
2 | Noisy back-translation | BLEU score | 45.6 | | Unverified
3 | mRASP+Fine-Tune | BLEU score | 44.3 | | Unverified
4 | Transformer + R-Drop | BLEU score | 43.95 | | Unverified
5 | Transformer (ADMIN init) | BLEU score | 43.8 | | Unverified
6 | Admin | BLEU score | 43.8 | | Unverified
7 | BERT-fused NMT | BLEU score | 43.78 | | Unverified
8 | MUSE (Parallel Multi-scale Attention) | BLEU score | 43.5 | | Unverified
9 | T5 | BLEU score | 43.4 | | Unverified
10 | Local Joint Self-attention | BLEU score | 43.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PiNMT | BLEU score | 40.43 | | Unverified
2 | BiBERT | BLEU score | 38.61 | | Unverified
3 | Bi-SimCut | BLEU score | 38.37 | | Unverified
4 | Cutoff + Relaxed Attention + LM | BLEU score | 37.96 | | Unverified
5 | DRDA | BLEU score | 37.95 | | Unverified
6 | Transformer + R-Drop + Cutoff | BLEU score | 37.9 | | Unverified
7 | SimCut | BLEU score | 37.81 | | Unverified
8 | Cutoff+Knee | BLEU score | 37.78 | | Unverified
9 | Cutoff | BLEU score | 37.6 | | Unverified
10 | CipherDAug | BLEU score | 37.53 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | HWTSC-Teacher-Sim | Score | 19.97 | | Unverified
2 | MS-COMET-22 | Score | 19.89 | | Unverified
3 | MS-COMET-QE-22 | Score | 19.76 | | Unverified
4 | KG-BERTScore | Score | 17.28 | | Unverified
5 | metricx_xl_DA_2019 | Score | 17.17 | | Unverified
6 | COMET-QE | Score | 16.8 | | Unverified
7 | COMET-22 | Score | 16.31 | | Unverified
8 | UniTE-src | Score | 15.68 | | Unverified
9 | UniTE-ref | Score | 15.38 | | Unverified
10 | metricx_xxl_DA_2019 | Score | 15.24 | | Unverified