SOTAVerified

Machine Translation

Machine translation is the task of automatically translating text from a source language into a different target language.

Approaches to machine translation range from rule-based to statistical to neural. More recently, attention-based encoder-decoder architectures such as the Transformer have driven major improvements in translation quality.
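The core operation behind these attention-based architectures is scaled dot-product attention: each decoder query attends over the encoder's key vectors and takes a weighted average of the value vectors. A minimal sketch in pure Python (illustrative only, not any particular system's implementation; real models operate on batched tensors with learned projection matrices):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def scaled_dot_product_attention(queries, keys, values):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V.

    queries, keys, values: lists of equal-length float vectors.
    Each query attends over every key; its output is the
    attention-weighted average of the value vectors.
    """
    d_k = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        weights = softmax(scores)  # weights sum to 1
        # Weighted average of the value vectors.
        out = [sum(w * v[j] for w, v in zip(weights, values))
               for j in range(len(values[0]))]
        outputs.append(out)
    return outputs
```

Because the weights are a softmax, each output vector is a convex combination of the values; a query most similar to the first key pulls the output toward the first value vector.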

One of the most popular dataset families used to benchmark machine translation systems is WMT. Commonly used evaluation metrics for machine translation include BLEU, METEOR, and NIST.
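BLEU, the metric used in the benchmark tables below, combines clipped n-gram precisions (usually up to 4-grams) with a brevity penalty for short outputs. A minimal single-reference sketch in pure Python, without the smoothing that production scorers such as sacreBLEU apply (assumes a non-empty tokenized candidate):

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    # Multiset of all n-grams in the token sequence.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU against a single reference.

    candidate, reference: lists of tokens. Returns a score in [0, 1].
    """
    precisions = []
    for n in range(1, max_n + 1):
        cand = ngram_counts(candidate, n)
        ref = ngram_counts(reference, n)
        overlap = sum((cand & ref).values())   # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # unsmoothed: any zero precision zeroes the score
    # Geometric mean of the n-gram precisions.
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty punishes candidates shorter than the reference.
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(log_avg)
```

Published BLEU scores, like those in the tables below, are typically corpus-level (precisions pooled over all sentences before the geometric mean) and scaled by 100.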

(Image credit: Google seq2seq)

Papers

Showing 5701–5750 of 10752 papers

Title (every paper on this page has an empty Status and a Hype count of 0):

Neural Reranking Improves Subjective Quality of Machine Translation: NAIST at WAT 2015
Neural Response Generation via GAN with an Approximate Embedding Layer
Neural Semantic Parsing
Neural sentence generation from formal semantics
Neural Sequence-Labelling Models for Grammatical Error Correction
Neural Sequence Learning Models for Word Sense Disambiguation
Neural Sequence-to-sequence Learning of Internal Word Structure
Neural Simultaneous Speech Translation Using Alignment-Based Chunking
Neural Speech Translation at AppTek
Neural Speech Translation: From Neural Machine Translation to Direct Speech Translation
Neural-Symbolic Recursive Machine for Systematic Generalization
Neural System Combination for Machine Translation
Neural Text Generation: A Practical Guide
Neural Text Generation with Artificial Negative Examples
Neural Text Normalization with Subword Units
Neural Text Simplification in Low-Resource Conditions Using Weak Supervision
Neural Transition-based Parsing of Library Deprecations
Neural versus Phrase-Based Machine Translation Quality: a Case Study
Neural vs. Phrase-Based Machine Translation in a Multi-Domain Scenario
Neural Zero-Inflated Quality Estimation Model For Automatic Speech Recognition System
Neuron Interaction Based Representation Composition for Neural Machine Translation
Neuron Specialization: Leveraging intrinsic task modularity for multilingual machine translation
NeuroPrune: A Neuro-inspired Topological Sparse Training Algorithm for Large Language Models
New Approach to translation of Isolated Units in English-Korean Machine Translation
New Directions in Vector Space Models of Meaning
New Language Pairs in TectoMT
New language resources for the Pashto language
NEWS 2018 Whitepaper
News about the Romanian Wordnet
News Citation Recommendation with Implicit and Explicit Semantics
New Trends for Modern Machine Translation with Large Reasoning Models
A Paradigm Shift: The Future of Machine Translation Lies with Large Language Models
New Word Detection for Sentiment Analysis
N-gram and Gazetteer List Based Named Entity Recognition for Urdu: A Scarce Resourced Language
N-gram-based Tense Models for Statistical Machine Translation
N-gram Counts and Language Models from the Common Crawl
N-gram Language Models and POS Distribution for the Identification of Spanish Varieties (Ngrammes et Traits Morphosyntaxiques pour la Identification de Variétés de l'Espagnol) [in French]
N-gram language models for massively parallel devices
N-Gram Nearest Neighbor Machine Translation
ngram-OAXE: Phrase-Based Order-Agnostic Cross Entropy for Non-Autoregressive Machine Translation
N-gram Prediction and Word Difference Representations for Language Modeling
NHK’s Lexically-Constrained Neural Machine Translation at WAT 2021
NICE: Neural Integrated Custom Engines
NICT-2 Translation System for WAT2016: Applying Domain Adaptation to Phrase-based Statistical Machine Translation
NICT at WAT 2015
NICT Kyoto Submission for the WMT’20 Quality Estimation Task: Intermediate Training for Domain and Task Adaptation
NICT-NAIST System for WMT17 Multimodal Translation Task
NICT's Corpus Filtering Systems for the WMT18 Parallel Corpus Filtering Task
NICT Self-Training Approach to Neural Machine Translation at NMT-2018
NICT's Machine Translation Systems for the WMT19 Similar Language Translation Task
Page 115 of 216

Benchmark Results

#  | Model                              | Metric     | Claimed | Verified | Status
1  | Transformer Cycle (Rev)            | BLEU score | 35.14   |          | Unverified
2  | Noisy back-translation             | BLEU score | 35      |          | Unverified
3  | Transformer+Rep(Uni)               | BLEU score | 33.89   |          | Unverified
4  | T5-11B                             | BLEU score | 32.1    |          | Unverified
5  | BiBERT                             | BLEU score | 31.26   |          | Unverified
6  | Transformer + R-Drop               | BLEU score | 30.91   |          | Unverified
7  | Bi-SimCut                          | BLEU score | 30.78   |          | Unverified
8  | BERT-fused NMT                     | BLEU score | 30.75   |          | Unverified
9  | Data Diversification - Transformer | BLEU score | 30.7    |          | Unverified
10 | SimCut                             | BLEU score | 30.56   |          | Unverified
#  | Model                                 | Metric     | Claimed | Verified | Status
1  | Transformer+BT (ADMIN init)           | BLEU score | 46.4    |          | Unverified
2  | Noisy back-translation                | BLEU score | 45.6    |          | Unverified
3  | mRASP+Fine-Tune                       | BLEU score | 44.3    |          | Unverified
4  | Transformer + R-Drop                  | BLEU score | 43.95   |          | Unverified
5  | Transformer (ADMIN init)              | BLEU score | 43.8    |          | Unverified
6  | Admin                                 | BLEU score | 43.8    |          | Unverified
7  | BERT-fused NMT                        | BLEU score | 43.78   |          | Unverified
8  | MUSE (Parallel Multi-scale Attention) | BLEU score | 43.5    |          | Unverified
9  | T5                                    | BLEU score | 43.4    |          | Unverified
10 | Local Joint Self-attention            | BLEU score | 43.3    |          | Unverified
#  | Model                           | Metric     | Claimed | Verified | Status
1  | PiNMT                           | BLEU score | 40.43   |          | Unverified
2  | BiBERT                          | BLEU score | 38.61   |          | Unverified
3  | Bi-SimCut                       | BLEU score | 38.37   |          | Unverified
4  | Cutoff + Relaxed Attention + LM | BLEU score | 37.96   |          | Unverified
5  | DRDA                            | BLEU score | 37.95   |          | Unverified
6  | Transformer + R-Drop + Cutoff   | BLEU score | 37.9    |          | Unverified
7  | SimCut                          | BLEU score | 37.81   |          | Unverified
8  | Cutoff+Knee                     | BLEU score | 37.78   |          | Unverified
9  | Cutoff                          | BLEU score | 37.6    |          | Unverified
10 | CipherDAug                      | BLEU score | 37.53   |          | Unverified
#  | Model               | Metric | Claimed | Verified | Status
1  | HWTSC-Teacher-Sim   | Score  | 19.97   |          | Unverified
2  | MS-COMET-22         | Score  | 19.89   |          | Unverified
3  | MS-COMET-QE-22      | Score  | 19.76   |          | Unverified
4  | KG-BERTScore        | Score  | 17.28   |          | Unverified
5  | metricx_xl_DA_2019  | Score  | 17.17   |          | Unverified
6  | COMET-QE            | Score  | 16.8    |          | Unverified
7  | COMET-22            | Score  | 16.31   |          | Unverified
8  | UniTE-src           | Score  | 15.68   |          | Unverified
9  | UniTE-ref           | Score  | 15.38   |          | Unverified
10 | metricx_xxl_DA_2019 | Score  | 15.24   |          | Unverified