SOTAVerified

Machine Translation

Machine translation is the task of automatically translating text from a source language into a different target language.

Approaches to machine translation range from rule-based to statistical to neural. More recently, attention-based encoder-decoder architectures like the Transformer have driven major improvements in machine translation.
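
At the core of such architectures, each decoder step attends over the encoder states: it scores its query vector against every encoder key, normalizes the scores with a softmax, and takes the weighted sum of the value vectors as a context vector. The following is a minimal pure-Python sketch of scaled dot-product attention with toy dimensions and illustrative names, not any particular library's API:

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single decoder query over
    a sequence of encoder keys/values (plain lists, toy sizes)."""
    d = len(query)
    # Dot-product similarity of the query with each key, scaled by sqrt(d).
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Weighted sum of the value vectors -> the context vector for this step.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]
```

With a query closely aligned to the first key, nearly all of the attention weight lands on the first value vector:

```python
ctx = attention([10.0, 0.0], [[1.0, 0.0], [0.0, 1.0]],
                [[1.0, 0.0], [0.0, 1.0]])
# ctx[0] is close to 1.0, ctx[1] close to 0.0
```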

One of the most popular datasets used to benchmark machine translation systems is the WMT family of datasets. Some of the most commonly used evaluation metrics for machine translation systems include BLEU, METEOR, NIST, and others.
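
BLEU, for example, scores a hypothesis by its clipped n-gram precision against a reference, combined across n-gram orders with a geometric mean and a brevity penalty. The following is a minimal sentence-level sketch (single reference, no smoothing); real evaluations use corpus-level tooling such as sacreBLEU:

```python
from collections import Counter
import math

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(hypothesis, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each hypothesis n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(len(hyp) - n + 1, 0)
        if overlap == 0 or total == 0:
            return 0.0  # any zero precision zeroes the geometric mean
        log_precisions.append(math.log(overlap / total))
    # Brevity penalty punishes hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(sum(log_precisions) / max_n)
```

A hypothesis identical to the reference scores 1.0; one differing in a single word scores lower, since the mismatch knocks out several overlapping n-grams at each order.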

(Image credit: Google seq2seq)

Papers

Showing 6101–6150 of 10752 papers

European Language Resource Coordination: Collecting Language Resources for Public Sector Multilingual Information Management
EuroSense: Automatic Harvesting of Multilingual Sense Annotations from Parallel Text
EvalD Reference-Less Discourse Evaluation for WMT18
Evaluating a Machine Translation System in a Technical Support Scenario
Evaluating Amharic Machine Translation
Evaluating and Combining Name Entity Recognition Systems
Evaluating and explaining training strategies for zero-shot cross-lingual news sentiment analysis
Evaluating (and Improving) Sentence Alignment under Noisy Conditions
Evaluating and Improving the Coreference Capabilities of Machine Translation Models
Evaluating and Optimizing the Effectiveness of Neural Machine Translation in Supporting Code Retrieval Models: A Study on the CAT Benchmark
Evaluating Appropriateness Of System Responses In A Spoken CALL Game
Evaluating Automatic Speech Recognition in Translation
Evaluating Compound Splitters Extrinsically with Textual Entailment
Evaluating Curriculum Learning Strategies in Neural Combinatorial Optimization
Evaluating Discourse Phenomena in Neural Machine Translation
Evaluating Domain Adaptation for Machine Translation Across Scenarios
Evaluating EcoLexiCAT: a Terminology-Enhanced CAT Tool
Evaluating Explanation Methods for Neural Machine Translation
Evaluating Features for Identifying Japanese-Chinese Bilingual Synonymous Technical Terms from Patent Families
Evaluating Gender Bias in Hindi-English Machine Translation
Evaluating Gender Bias in the Translation of Gender-Neutral Languages into English
Evaluating Gender Bias Transfer from Film Data
Evaluating Grammaticality in Seq2seq Models with a Broad Coverage HPSG Grammar: A Case Study on Machine Translation
Evaluating Improvised Hip Hop Lyrics - Challenges and Observations
Evaluating Indirect Strategies for Chinese-Spanish Statistical Machine Translation
Evaluating Low-Resource Machine Translation between Chinese and Vietnamese with Back-Translation
Evaluating machine translation for assimilation via a gap-filling task
Evaluating machine translation in a low-resource language combination: Spanish-Galician.
Evaluating Machine Translation in a Usage Scenario
Evaluating Machine Translation in Cross-lingual E-Commerce Search
Evaluating Machine Translation Performance on Chinese Idioms with a Blacklist Method
Evaluating Machine Translation Quality with Conformal Predictive Distributions
Evaluating Machine Translation Systems with Second Language Proficiency Tests
Evaluating MT Systems: A Theoretical Framework
Evaluating Neural Machine Translation in English-Japanese Task
Evaluating o1-Like LLMs: Unlocking Reasoning for Translation through Comprehensive Analysis
Evaluating Pre-training Objectives for Low-Resource Translation into Morphologically Rich Languages
Evaluating Robustness to Input Perturbations for Neural Machine Translation
Evaluating Syntactic Properties of Seq2seq Output with a Broad Coverage HPSG: A Case Study on Machine Translation
Evaluating Text Style Transfer Evaluation: Are There Any Reliable Metrics?
Evaluating the Cross-Lingual Effectiveness of Massively Multilingual Neural Machine Translation
Evaluating the effects of interactivity in a post-editing workbench
Evaluating the Efficacy of Length-Controllable Machine Translation
Evaluating the Impact of Using a Domain-specific Bilingual Lexicon on the Performance of a Hybrid Machine Translation Approach
Evaluating the IWSLT2023 Speech Translation Tasks: Human Annotations, Automatic Metrics, and Segmentation
Evaluating the Learning Curve of Domain Adaptive Statistical Machine Translation Systems
Evaluating the Performance of Back-translation for Low Resource English-Marathi Language Pair: CFILT-IITBombay @ LoResMT 2021
Evaluating the Reliability and Interaction of Recursively Used Feature Classes for Terminology Extraction
Evaluating the Translation Accuracy of a Novel Language-Independent MT Methodology
Evaluating the usefulness of neural machine translation for the Polish translators in the European Commission
Page 123 of 216

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Transformer Cycle (Rev) | BLEU score | 35.14 | | Unverified
2 | Noisy back-translation | BLEU score | 35 | | Unverified
3 | Transformer+Rep(Uni) | BLEU score | 33.89 | | Unverified
4 | T5-11B | BLEU score | 32.1 | | Unverified
5 | BiBERT | BLEU score | 31.26 | | Unverified
6 | Transformer + R-Drop | BLEU score | 30.91 | | Unverified
7 | Bi-SimCut | BLEU score | 30.78 | | Unverified
8 | BERT-fused NMT | BLEU score | 30.75 | | Unverified
9 | Data Diversification - Transformer | BLEU score | 30.7 | | Unverified
10 | SimCut | BLEU score | 30.56 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | Transformer+BT (ADMIN init) | BLEU score | 46.4 | | Unverified
2 | Noisy back-translation | BLEU score | 45.6 | | Unverified
3 | mRASP+Fine-Tune | BLEU score | 44.3 | | Unverified
4 | Transformer + R-Drop | BLEU score | 43.95 | | Unverified
5 | Transformer (ADMIN init) | BLEU score | 43.8 | | Unverified
6 | Admin | BLEU score | 43.8 | | Unverified
7 | BERT-fused NMT | BLEU score | 43.78 | | Unverified
8 | MUSE (Parallel Multi-scale Attention) | BLEU score | 43.5 | | Unverified
9 | T5 | BLEU score | 43.4 | | Unverified
10 | Local Joint Self-attention | BLEU score | 43.3 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | PiNMT | BLEU score | 40.43 | | Unverified
2 | BiBERT | BLEU score | 38.61 | | Unverified
3 | Bi-SimCut | BLEU score | 38.37 | | Unverified
4 | Cutoff + Relaxed Attention + LM | BLEU score | 37.96 | | Unverified
5 | DRDA | BLEU score | 37.95 | | Unverified
6 | Transformer + R-Drop + Cutoff | BLEU score | 37.9 | | Unverified
7 | SimCut | BLEU score | 37.81 | | Unverified
8 | Cutoff+Knee | BLEU score | 37.78 | | Unverified
9 | Cutoff | BLEU score | 37.6 | | Unverified
10 | CipherDAug | BLEU score | 37.53 | | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | HWTSC-Teacher-Sim | Score | 19.97 | | Unverified
2 | MS-COMET-22 | Score | 19.89 | | Unverified
3 | MS-COMET-QE-22 | Score | 19.76 | | Unverified
4 | KG-BERTScore | Score | 17.28 | | Unverified
5 | metricx_xl_DA_2019 | Score | 17.17 | | Unverified
6 | COMET-QE | Score | 16.8 | | Unverified
7 | COMET-22 | Score | 16.31 | | Unverified
8 | UniTE-src | Score | 15.68 | | Unverified
9 | UniTE-ref | Score | 15.38 | | Unverified
10 | metricx_xxl_DA_2019 | Score | 15.24 | | Unverified