SOTAVerified

Machine Translation

Machine translation is the task of translating a sentence in a source language to a different target language.

Approaches to machine translation range from rule-based to statistical to neural. More recently, encoder-decoder attention-based architectures such as the Transformer have attained major improvements in machine translation.
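
The core operation behind these attention-based encoder-decoder models is scaled dot-product attention, where each target position computes a weighted sum over source representations. A minimal NumPy sketch (toy shapes and random inputs are illustrative, not from any specific model):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V -- the attention step used in
    Transformer-style encoder-decoder translation models."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over source positions
    return weights @ V                              # weighted sum of values

# Toy example: 3 target positions attending over 4 source positions (d_k = 8)
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 8)
```

Each row of the output is a convex combination of the value vectors, with weights determined by how well that query matches each key.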

One of the most popular datasets used to benchmark machine translation systems is the WMT family of datasets. Some of the most commonly used evaluation metrics for machine translation systems include BLEU, METEOR, NIST, and others.
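
BLEU, the most widely reported of these metrics, combines clipped n-gram precision (typically up to 4-grams) with a brevity penalty that discourages overly short outputs. A minimal sentence-level sketch, with crude smoothing for zero n-gram counts (production systems should use a standard implementation such as sacreBLEU):

```python
import math
from collections import Counter

def sentence_bleu(reference, hypothesis, max_n=4):
    """Minimal BLEU sketch: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    precisions = []
    for n in range(1, max_n + 1):
        hyp, ref = ngrams(hypothesis, n), ngrams(reference, n)
        overlap = sum((hyp & ref).values())            # clipped matches
        total = max(sum(hyp.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # smooth zero counts

    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = min(1.0, math.exp(1 - len(reference) / len(hypothesis)))  # brevity penalty
    return bp * geo_mean

ref = "the cat is on the mat".split()
hyp = "the cat is on the mat".split()
print(round(sentence_bleu(ref, hyp), 2))  # perfect match -> 1.0
```

Note that real evaluations compute BLEU at the corpus level (pooling n-gram counts across sentences), which behaves differently from averaging sentence-level scores.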

(Image credit: Google seq2seq)

Papers

Showing 5801–5850 of 10752 papers

Title | Status | Hype
`Indicatements' that character language models learn English morpho-syntactic units and regularities | — | 0
Introspection for convolutional automatic speech recognition | — | 0
Japanese Advertising Slogan Generator using Case Frame and Word Vector | — | 0
Language-Independent Representor for Neural Machine Translation | — | 0
Language Modeling Teaches You More than Translation Does: Lessons Learned Through Auxiliary Syntactic Task Analysis | — | 0
Latent Variable Model for Multi-modal Translation | Code | 0
Learning Unsupervised Word Mapping by Maximizing Mean Discrepancy | — | 0
Modelling Pro-drop with the Rational Speech Acts Model | — | 0
Multi-source synthetic treebank creation for improved cross-lingual dependency parsing | Code | 0
Neural sentence generation from formal semantics | — | 0
Portable, layer-wise task performance monitoring for NLP models | — | 0
Sisyphus, a Workflow Manager Designed for Machine Translation and Automatic Speech Recognition | — | 0
Stylistically User-Specific Generation | — | 0
Template-based multilingual football reports generation using Wikidata as a knowledge base | — | 0
Toward Universal Dependencies for Shipibo-Konibo | — | 0
What do RNN Language Models Learn about Filler--Gap Dependencies? | — | 0
Treat the system like a human student: Automatic naturalness evaluation of generated text without reference texts | — | 0
Towards Linear Time Neural Machine Translation with Capsule Networks | — | 0
Unsupervised Token-wise Alignment to Improve Interpretation of Encoder-Decoder Models | — | 0
Using Wikipedia Edits in Low Resource Grammatical Error Correction | Code | 0
When does deep multi-task learning work for loosely related document classification tasks? | — | 0
Cross-Lingual Transfer Learning for Multilingual Task Oriented Dialog | — | 0
Hallucinations in neural machine translation | — | 0
Simplifying Neural Machine Translation with Addition-Subtraction Twin-Gated Recurrent Networks | Code | 0
Machine Translation between Vietnamese and English: an Empirical Study | — | 0
Unsupervised Neural Machine Translation Initialized by Unsupervised Statistical Machine Translation | — | 0
Learning to Teach with Dynamic Loss Functions | — | 0
On Controllable Sparse Alternatives to Softmax | — | 0
Counting in Language with RNNs | — | 0
Parallel Attention Mechanisms in Neural Machine Translation | — | 0
Learning to Screen for Fast Softmax Inference on Large Vocabulary Neural Networks | — | 0
Exploiting Deep Representations for Neural Machine Translation | — | 0
Modeling Localness for Self-Attention Networks | — | 0
Learning to Discriminate Noises for Incorporating External Information in Neural Machine Translation | — | 0
The MeMAD Submission to the IWSLT 2018 Speech Translation Task | — | 0
Area Attention | Code | 0
Out-of-Order Decoding for Robust Neural Machine Translation | — | 0
Language Modeling at Scale | — | 0
On Zero-shot Cross-lingual Transfer of Multilingual Neural Machine Translation | — | 0
Identifying and Controlling Important Neurons in Neural Machine Translation | — | 0
Learning Robust Joint Representations for Multimodal Sentiment Analysis | — | 0
Abstractive Summarization Using Attentive Neural Techniques | Code | 0
Improving Multilingual Semantic Textual Similarity with Shared Sentence Encoder for Low-resource Languages | — | 0
Optimizing Segmentation Granularity for Neural Machine Translation | — | 0
Impact of Corpora Quality on Neural Machine Translation | Code | 0
An Analysis of Attention Mechanisms: The Case of Word Sense Disambiguation in Neural Machine Translation | — | 0
Sequence to Sequence Mixture Model for Diverse Machine Translation | — | 0
Fine-tuning on Clean Data for End-to-End Speech Translation: FBK @ IWSLT 2018 | — | 0
Multi-Source Neural Machine Translation with Data Augmentation | — | 0
(Self-Attentive) Autoencoder-based Universal Language Representation for Machine Translation | — | 0
Page 117 of 216

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Transformer Cycle (Rev) | BLEU score | 35.14 | — | Unverified
2 | Noisy back-translation | BLEU score | 35 | — | Unverified
3 | Transformer+Rep(Uni) | BLEU score | 33.89 | — | Unverified
4 | T5-11B | BLEU score | 32.1 | — | Unverified
5 | BiBERT | BLEU score | 31.26 | — | Unverified
6 | Transformer + R-Drop | BLEU score | 30.91 | — | Unverified
7 | Bi-SimCut | BLEU score | 30.78 | — | Unverified
8 | BERT-fused NMT | BLEU score | 30.75 | — | Unverified
9 | Data Diversification - Transformer | BLEU score | 30.7 | — | Unverified
10 | SimCut | BLEU score | 30.56 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Transformer+BT (ADMIN init) | BLEU score | 46.4 | — | Unverified
2 | Noisy back-translation | BLEU score | 45.6 | — | Unverified
3 | mRASP+Fine-Tune | BLEU score | 44.3 | — | Unverified
4 | Transformer + R-Drop | BLEU score | 43.95 | — | Unverified
5 | Transformer (ADMIN init) | BLEU score | 43.8 | — | Unverified
6 | Admin | BLEU score | 43.8 | — | Unverified
7 | BERT-fused NMT | BLEU score | 43.78 | — | Unverified
8 | MUSE (Parallel Multi-scale Attention) | BLEU score | 43.5 | — | Unverified
9 | T5 | BLEU score | 43.4 | — | Unverified
10 | Local Joint Self-attention | BLEU score | 43.3 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PiNMT | BLEU score | 40.43 | — | Unverified
2 | BiBERT | BLEU score | 38.61 | — | Unverified
3 | Bi-SimCut | BLEU score | 38.37 | — | Unverified
4 | Cutoff + Relaxed Attention + LM | BLEU score | 37.96 | — | Unverified
5 | DRDA | BLEU score | 37.95 | — | Unverified
6 | Transformer + R-Drop + Cutoff | BLEU score | 37.9 | — | Unverified
7 | SimCut | BLEU score | 37.81 | — | Unverified
8 | Cutoff+Knee | BLEU score | 37.78 | — | Unverified
9 | Cutoff | BLEU score | 37.6 | — | Unverified
10 | CipherDAug | BLEU score | 37.53 | — | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | HWTSC-Teacher-Sim | Score | 19.97 | — | Unverified
2 | MS-COMET-22 | Score | 19.89 | — | Unverified
3 | MS-COMET-QE-22 | Score | 19.76 | — | Unverified
4 | KG-BERTScore | Score | 17.28 | — | Unverified
5 | metricx_xl_DA_2019 | Score | 17.17 | — | Unverified
6 | COMET-QE | Score | 16.8 | — | Unverified
7 | COMET-22 | Score | 16.31 | — | Unverified
8 | UniTE-src | Score | 15.68 | — | Unverified
9 | UniTE-ref | Score | 15.38 | — | Unverified
10 | metricx_xxl_DA_2019 | Score | 15.24 | — | Unverified