SOTAVerified

Machine Translation

Machine translation is the task of automatically translating text from a source language into a different target language.

Approaches to machine translation range from rule-based to statistical to neural. More recently, encoder-decoder architectures built on attention, such as the Transformer, have driven major improvements in machine translation quality.
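
The attention mechanism at the heart of these encoder-decoder models can be illustrated in a few lines. Below is a minimal pure-Python sketch of scaled dot-product attention for a single query vector; the function names and the toy vectors are ours, not from any particular library.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(query, keys, values):
    """Scaled dot-product attention for one query:
    score each key against the query, normalize with softmax,
    and return the weighted sum of the value vectors."""
    d = len(query)
    scores = [dot(query, k) / math.sqrt(d) for k in keys]
    weights = softmax(scores)
    # Weighted combination of value vectors, dimension by dimension.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]
```

When the query aligns strongly with one key, the output is dominated by that key's value vector, which is how a decoder "attends" to the most relevant encoder state.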

The WMT family of datasets is among the most widely used benchmarks for machine translation systems. Commonly used evaluation metrics include BLEU, METEOR, and NIST.
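
To make the BLEU scores in the tables below concrete, here is a minimal sentence-level BLEU sketch: the geometric mean of clipped n-gram precisions (up to 4-grams) times a brevity penalty. This is a simplified illustration without the smoothing used by standard tools such as sacreBLEU.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        # Clip each n-gram count by its count in the reference.
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # without smoothing, any zero precision zeroes BLEU
    log_p = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty punishes candidates shorter than the reference.
    bp = 1.0 if len(candidate) > len(reference) else \
        math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(log_p)
```

A perfect match scores 1.0 (often reported as 100); the leaderboard entries below report BLEU on this 0-100 scale.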

(Image credit: Google seq2seq)

Papers

Showing 50 of 10752 papers

| Title | Status | Hype |
| --- | --- | --- |
| Lossless Data Compression with Transformer | | 0 |
| Linguistic Embeddings as a Common-Sense Knowledge Repository: Challenges and Opportunities | | 0 |
| Distilled embedding: non-linear embedding factorization using knowledge distillation | | 0 |
| Molecular Graph Enhanced Transformer for Retrosynthesis Prediction | | 0 |
| Improved Training Techniques for Online Neural Machine Translation | | 0 |
| Context Based Machine Translation With Recurrent Neural Network For English-Amharic Translation | | 0 |
| Multichannel Generative Language Models | | 0 |
| AdaScale SGD: A Scale-Invariant Algorithm for Distributed Training | | 0 |
| PNAT: Non-autoregressive Transformer by Position Learning | | 0 |
| Putting Machine Translation in Context with the Noisy Channel Model | | 0 |
| Sparse Transformer: Concentrated Attention Through Explicit Selection | | 0 |
| Reducing Transformer Depth on Demand with Structured Dropout | Code | 1 |
| Question Answering is a Format; When is it Useful? | | 0 |
| Breaking the Data Barrier: Towards Robust Speech Translation via Adversarial Stability Training | | 0 |
| Efficiently Reusing Old Models Across Languages via Transfer Learning | | 0 |
| In Conclusion Not Repetition: Comprehensive Abstractive Summarization With Diversified Attention Based On Determinantal Point Processes | Code | 0 |
| Cross-Lingual Natural Language Generation via Pre-Training | Code | 1 |
| Data Ordering Patterns for Neural Machine Translation: An Empirical Study | | 0 |
| Towards Interpreting Recurrent Neural Networks through Probabilistic Abstraction | Code | 0 |
| Inducing Constituency Trees through Neural Machine Translation | | 0 |
| Self-attention based end-to-end Hindi-English Neural Machine Translation | | 0 |
| Pivot-based Transfer Learning for Neural Machine Translation between Non-English Languages | | 0 |
| Creative GANs for generating poems, lyrics, and metaphors | Code | 0 |
| Improved Variational Neural Machine Translation by Promoting Mutual Information | | 0 |
| Espresso: A Fast End-to-end Neural Speech Recognition Toolkit | Code | 1 |
| Simple, Scalable Adaptation for Neural Machine Translation | | 0 |
| Memory-Augmented Neural Networks for Machine Translation | Code | 0 |
| Pointer-based Fusion of Bilingual Lexicons into Neural Machine Translation | Code | 0 |
| Ludwig: a type-based declarative deep learning toolbox | Code | 3 |
| Multilingual Neural Machine Translation for Zero-Resource Languages | Code | 0 |
| A simple discriminative training method for machine translation with large-scale features | | 0 |
| Automatically Extracting Challenge Sets for Non local Phenomena in Neural Machine Translation | Code | 0 |
| Hint-Based Training for Non-Autoregressive Machine Translation | Code | 0 |
| Ouroboros: On Accelerating Training of Transformer-Based Language Models | Code | 1 |
| Beyond BLEU: Training Neural Machine Translation with Semantic Similarity | Code | 0 |
| Harnessing Indirect Training Data for End-to-End Automatic Speech Translation: Tricks of the Trade | | 0 |
| A Universal Parent Model for Low-Resource Neural Machine Translation Transfer | | 0 |
| SANVis: Visual Analytics for Understanding Self-Attention Networks | | 0 |
| Adaptive Scheduling for Multi-Task Learning | | 0 |
| A Comparative Study on Transformer vs RNN in Speech Applications | Code | 0 |
| Neural Machine Translation with 4-Bit Precision and Beyond | | 0 |
| VizSeq: A Visual Analysis Toolkit for Text Generation Tasks | Code | 0 |
| Self-Attentional Models Application in Task-Oriented Dialogue Generation Systems | | 0 |
| Dynamic Fusion: Attentional Language Model for Neural Machine Translation | | 0 |
| Getting Gender Right in Neural Machine Translation | | 0 |
| Combining SMT and NMT Back-Translated Data for Efficient NMT | | 0 |
| MULE: Multimodal Universal Language Embedding | | 0 |
| Neural Machine Translation with Byte-Level Subwords | Code | 0 |
| Self Learning from Large Scale Code Corpus to Infer Structure of Method Invocations | | 0 |
| Enhancing Machine Translation with Dependency-Aware Self-Attention | Code | 0 |
Page 101 of 216

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Transformer Cycle (Rev) | BLEU score | 35.14 | | Unverified |
| 2 | Noisy back-translation | BLEU score | 35 | | Unverified |
| 3 | Transformer+Rep(Uni) | BLEU score | 33.89 | | Unverified |
| 4 | T5-11B | BLEU score | 32.1 | | Unverified |
| 5 | BiBERT | BLEU score | 31.26 | | Unverified |
| 6 | Transformer + R-Drop | BLEU score | 30.91 | | Unverified |
| 7 | Bi-SimCut | BLEU score | 30.78 | | Unverified |
| 8 | BERT-fused NMT | BLEU score | 30.75 | | Unverified |
| 9 | Data Diversification - Transformer | BLEU score | 30.7 | | Unverified |
| 10 | SimCut | BLEU score | 30.56 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | Transformer+BT (ADMIN init) | BLEU score | 46.4 | | Unverified |
| 2 | Noisy back-translation | BLEU score | 45.6 | | Unverified |
| 3 | mRASP+Fine-Tune | BLEU score | 44.3 | | Unverified |
| 4 | Transformer + R-Drop | BLEU score | 43.95 | | Unverified |
| 5 | Admin | BLEU score | 43.8 | | Unverified |
| 6 | Transformer (ADMIN init) | BLEU score | 43.8 | | Unverified |
| 7 | BERT-fused NMT | BLEU score | 43.78 | | Unverified |
| 8 | MUSE (Parallel Multi-scale Attention) | BLEU score | 43.5 | | Unverified |
| 9 | T5 | BLEU score | 43.4 | | Unverified |
| 10 | Local Joint Self-attention | BLEU score | 43.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | PiNMT | BLEU score | 40.43 | | Unverified |
| 2 | BiBERT | BLEU score | 38.61 | | Unverified |
| 3 | Bi-SimCut | BLEU score | 38.37 | | Unverified |
| 4 | Cutoff + Relaxed Attention + LM | BLEU score | 37.96 | | Unverified |
| 5 | DRDA | BLEU score | 37.95 | | Unverified |
| 6 | Transformer + R-Drop + Cutoff | BLEU score | 37.9 | | Unverified |
| 7 | SimCut | BLEU score | 37.81 | | Unverified |
| 8 | Cutoff+Knee | BLEU score | 37.78 | | Unverified |
| 9 | Cutoff | BLEU score | 37.6 | | Unverified |
| 10 | CipherDAug | BLEU score | 37.53 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| 1 | HWTSC-Teacher-Sim | Score | 19.97 | | Unverified |
| 2 | MS-COMET-22 | Score | 19.89 | | Unverified |
| 3 | MS-COMET-QE-22 | Score | 19.76 | | Unverified |
| 4 | KG-BERTScore | Score | 17.28 | | Unverified |
| 5 | metricx_xl_DA_2019 | Score | 17.17 | | Unverified |
| 6 | COMET-QE | Score | 16.8 | | Unverified |
| 7 | COMET-22 | Score | 16.31 | | Unverified |
| 8 | UniTE-src | Score | 15.68 | | Unverified |
| 9 | UniTE-ref | Score | 15.38 | | Unverified |
| 10 | metricx_xxl_DA_2019 | Score | 15.24 | | Unverified |