SOTAVerified

Machine Translation

Machine translation is the task of automatically translating text from a source language into a different target language.

Approaches to machine translation range from rule-based to statistical to neural. More recently, attention-based encoder-decoder architectures such as the Transformer have driven major improvements in machine translation quality.
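The core building block of these attention-based architectures is scaled dot-product attention. The sketch below is a minimal, generic NumPy illustration of that operation, not the implementation of any specific system listed on this page:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V, the core Transformer operation.

    Q, K, V: 2-D arrays of shape (num_queries, d_k), (num_keys, d_k),
    and (num_keys, d_v) respectively.
    """
    d_k = Q.shape[-1]
    # Similarity of every query with every key, scaled for stability.
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weight-averaged mix of the value vectors.
    return weights @ V, weights
```

In an encoder-decoder model, the decoder's queries attend over the encoder's keys and values, which is what lets the model align target words with relevant source words.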

One of the most popular datasets used to benchmark machine translation systems is the WMT family of datasets. Some of the most commonly used evaluation metrics for machine translation systems include BLEU, METEOR, NIST, and others.
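BLEU, the most common of these metrics, combines clipped n-gram precision with a brevity penalty. The following is a simplified sentence-level sketch of the idea; production toolkits such as sacreBLEU add smoothing, standardized tokenization, and corpus-level aggregation:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(hypothesis, reference, max_n=4):
    """Unsmoothed sentence-level BLEU with uniform n-gram weights."""
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each hypothesis n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(len(hyp) - n + 1, 0)
        if total == 0 or overlap == 0:
            return 0.0  # without smoothing, any zero precision gives BLEU = 0
        log_precisions.append(math.log(overlap / total))
    # Brevity penalty: punish hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(sum(log_precisions) / max_n)
```

For example, an exact match scores 1.0, while a hypothesis sharing no unigrams with the reference scores 0.0.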

(Image credit: Google seq2seq)

Papers

Showing papers 10101–10150 of 10752

Each entry below was listed with an empty Status and a Hype count of 0:

- The LAIX Systems in the BEA-2019 GEC Shared Task
- Understanding the Performance of Statistical MT Systems: A Linear Regression Framework
- The Kyoto University Cross-Lingual Pronoun Translation System
- Undirected Machine Translation with Discriminative Reinforcement Learning
- UNED: Improving Text Similarity Measures without Human Assessments
- Une plate-forme générique et ouverte pour le traitement des expressions polylexicales (An Open and Generic Framework for the Acquisition of Multiword Expressions) [in French]
- Unfolding and Shrinking Neural Machine Translation Ensembles
- UNIBA-CORE: Combining Strategies for Semantic Textual Similarity
- Word Sense Disambiguation in Hindi Language Using Hyperspace Analogue to Language and Fuzzy C-Means Clustering
- UniDrop: A Simple yet Effective Technique to Improve Transformer without Extra Cost
- Unified Embedding for Universal Multilingual Neural Machine Translation
- Unified Expectation Maximization
- Unified Guidelines and Resources for Arabic Dialect Orthography
- Unified Model Learning for Various Neural Machine Translation
- Unified NMT models for the Indian subcontinent transcending script-barriers
- Unified Segment-to-Segment Framework for Simultaneous Sequence Generation
- Word Sense Disambiguation using Diffusion Kernel PCA
- Unifying Bayesian Inference and Vector Space Models for Improved Decipherment
- Unifying Input and Output Smoothing in Neural Machine Translation
- Word Sense Disambiguation via PropStore and OntoNotes for Event Mention Detection
- UniMelb at SemEval-2016 Task 3: Identifying Similar Questions by combining a CNN with String Similarity Measures
- UniMelb_NLP-CORE: Integrating predictions from multiple domains and feature sets for estimating semantic textual similarity
- Word Sense Induction for Machine Translation
- The KIT-LIMSI Translation System for WMT 2015
- Unity in Diversity: A Unified Parsing Strategy for Major Indian Languages
- Universal Conceptual Cognitive Annotation (UCCA)
- Universal Conditional Masked Language Pre-training for Neural Machine Translation
- Universal Dependencies-based syntactic features in detecting human translation varieties
- Universal Dependencies for Amharic
- Universal Dependencies for Arabic
- Universal Multimodal Representation for Language Understanding
- Universal Neural Machine Translation for Extremely Low Resource Languages
- The KIT-LIMSI Translation System for WMT 2014
- Universal Reordering via Linguistic Typology
- Word Shape Matters: Robust Machine Translation with Visual Embedding
- Word Similarity Datasets for Indian Languages: Annotation and Baseline Systems
- Universal Vector Neural Machine Translation With Effective Attention
- University Entrance Examinations as a Benchmark Resource for NLP-based Problem Solving
- University of Cape Town's WMT22 System: Multilingual Machine Translation for Southern African Languages
- The ILMT-s2s Corpus ― A Multimodal Interlingual Map Task Corpus
- University of Rochester WMT 2017 NMT System Submission
- University of Tsukuba's Machine Translation System for IWSLT20 Open Domain Translation Task
- UnixMan Corpus: A Resource for Language Learning in the Unix Domain
- The Karlsruhe Institute of Technology Systems for the News Translation Task in WMT 2017
- Unleashing the Power of Neural Discourse Parsers -- A Context and Structure Aware Approach Using Large Scale Pretraining
- UNL Explorer
- Word's Vector Representations meet Machine Translation
- UNL-ization of Punjabi with IAN
- Unlocking Layer-wise Relevance Propagation for Autoencoders
Page 203 of 216

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Transformer Cycle (Rev) | BLEU score | 35.14 | | Unverified |
| 2 | Noisy back-translation | BLEU score | 35 | | Unverified |
| 3 | Transformer+Rep(Uni) | BLEU score | 33.89 | | Unverified |
| 4 | T5-11B | BLEU score | 32.1 | | Unverified |
| 5 | BiBERT | BLEU score | 31.26 | | Unverified |
| 6 | Transformer + R-Drop | BLEU score | 30.91 | | Unverified |
| 7 | Bi-SimCut | BLEU score | 30.78 | | Unverified |
| 8 | BERT-fused NMT | BLEU score | 30.75 | | Unverified |
| 9 | Data Diversification - Transformer | BLEU score | 30.7 | | Unverified |
| 10 | SimCut | BLEU score | 30.56 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | Transformer+BT (ADMIN init) | BLEU score | 46.4 | | Unverified |
| 2 | Noisy back-translation | BLEU score | 45.6 | | Unverified |
| 3 | mRASP+Fine-Tune | BLEU score | 44.3 | | Unverified |
| 4 | Transformer + R-Drop | BLEU score | 43.95 | | Unverified |
| 5 | Transformer (ADMIN init) | BLEU score | 43.8 | | Unverified |
| 6 | Admin | BLEU score | 43.8 | | Unverified |
| 7 | BERT-fused NMT | BLEU score | 43.78 | | Unverified |
| 8 | MUSE (Parallel Multi-scale Attention) | BLEU score | 43.5 | | Unverified |
| 9 | T5 | BLEU score | 43.4 | | Unverified |
| 10 | Local Joint Self-attention | BLEU score | 43.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | PiNMT | BLEU score | 40.43 | | Unverified |
| 2 | BiBERT | BLEU score | 38.61 | | Unverified |
| 3 | Bi-SimCut | BLEU score | 38.37 | | Unverified |
| 4 | Cutoff + Relaxed Attention + LM | BLEU score | 37.96 | | Unverified |
| 5 | DRDA | BLEU score | 37.95 | | Unverified |
| 6 | Transformer + R-Drop + Cutoff | BLEU score | 37.9 | | Unverified |
| 7 | SimCut | BLEU score | 37.81 | | Unverified |
| 8 | Cutoff+Knee | BLEU score | 37.78 | | Unverified |
| 9 | Cutoff | BLEU score | 37.6 | | Unverified |
| 10 | CipherDAug | BLEU score | 37.53 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|-------|--------|---------|----------|--------|
| 1 | HWTSC-Teacher-Sim | Score | 19.97 | | Unverified |
| 2 | MS-COMET-22 | Score | 19.89 | | Unverified |
| 3 | MS-COMET-QE-22 | Score | 19.76 | | Unverified |
| 4 | KG-BERTScore | Score | 17.28 | | Unverified |
| 5 | metricx_xl_DA_2019 | Score | 17.17 | | Unverified |
| 6 | COMET-QE | Score | 16.8 | | Unverified |
| 7 | COMET-22 | Score | 16.31 | | Unverified |
| 8 | UniTE-src | Score | 15.68 | | Unverified |
| 9 | UniTE-ref | Score | 15.38 | | Unverified |
| 10 | metricx_xxl_DA_2019 | Score | 15.24 | | Unverified |