SOTAVerified

Machine Translation

Machine translation is the task of automatically translating text from a source language into a different target language.

Approaches to machine translation range from rule-based to statistical to neural. More recently, attention-based encoder-decoder architectures such as the Transformer have driven major improvements in machine translation quality.
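At the core of these attention-based architectures is scaled dot-product attention, which computes softmax(QK^T / sqrt(d)) V: each query attends to all keys and returns a weighted average of the values. A minimal plain-Python sketch for illustration only (real systems operate on batched tensors, not lists):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over lists of vectors.

    Q: queries, K: keys, V: values (lists of equal-length float lists).
    Returns one output vector per query: a convex combination of the
    value vectors, weighted by softmax(q . k / sqrt(d)).
    """
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

Because the weights sum to one, each output row is always a convex combination of the value vectors; a query aligned with a key pulls the output toward that key's value.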

One of the most popular benchmark suites for machine translation systems is the WMT family of datasets. Commonly used evaluation metrics include BLEU, METEOR, and NIST.
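BLEU, the most widely reported of these metrics, combines modified n-gram precision (n = 1..4) with a brevity penalty that discounts candidates shorter than the reference. A simplified single-reference, smoothing-free sketch; production evaluations typically use a standard implementation such as sacreBLEU rather than hand-rolled code:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n])
                   for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU, single reference, uniform weights, no smoothing.

    Returns 0.0 as soon as any n-gram order has zero overlap (real
    implementations smooth instead of zeroing out the whole score).
    """
    precisions = []
    for n in range(1, max_n + 1):
        cand = ngrams(candidate, n)
        ref = ngrams(reference, n)
        # clipped (modified) precision: each reference n-gram can be
        # matched at most as many times as it occurs in the reference
        overlap = sum(min(count, ref[g]) for g, count in cand.items())
        total = sum(cand.values())
        if total == 0 or overlap == 0:
            return 0.0
        precisions.append(overlap / total)
    # brevity penalty: penalize candidates shorter than the reference
    if len(candidate) > len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

The BLEU numbers in the leaderboard below are corpus-level scores (usually reported multiplied by 100), but the clipped-precision and brevity-penalty mechanics are the same.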


Papers

Showing 2351–2400 of 10752 papers

| Title | Status | Hype |
|---|---|---|
| Integrating Translation Memories into Non-Autoregressive Machine Translation | Code | 0 |
| DATScore: Evaluating Translation with Data Augmented Translations | | 0 |
| Investigating Massive Multilingual Pre-Trained Machine Translation Models for Clinical Domain via Transfer Learning | | 0 |
| Checks and Strategies for Enabling Code-Switched Machine Translation | | 0 |
| Streaming Punctuation for Long-form Dictation with Transformers | | 0 |
| Improving Robustness of Retrieval Augmented Translation via Shuffling of Suggestions | | 0 |
| Exploring Segmentation Approaches for Neural Machine Translation of Code-Switched Egyptian Arabic-English Text | | 0 |
| Improving Retrieval Augmented Neural Machine Translation by Controlling Source and Fuzzy-Match Interactions | | 0 |
| Automatic Evaluation and Analysis of Idioms in Neural Machine Translation | Code | 0 |
| ngram-OAXE: Phrase-Based Order-Agnostic Cross Entropy for Non-Autoregressive Machine Translation | | 0 |
| LLMEffiChecker: Understanding and Testing Efficiency Degradation of Large Language Models | Code | 0 |
| Toxicity in Multilingual Machine Translation at Scale | | 0 |
| Measuring Fine-Grained Semantic Equivalence with Abstract Meaning Representation | | 0 |
| Reinforcement Learning with Large Action Spaces for Neural Machine Translation | | 0 |
| Revisiting Syllables in Language Modelling and their Application on Low-Resource Machine Translation | | 0 |
| Neural-Symbolic Recursive Machine for Systematic Generalization | | 0 |
| Public Transit Arrival Prediction: a Seq2Seq RNN Approach | | 0 |
| The boundaries of meaning: a case study in neural machine translation | | 0 |
| Machine Translation Between High-resource Languages in a Language Documentation Setting | | 0 |
| CoDoNMT: Modeling Cohesion Devices for Document-Level Neural Machine Translation | Code | 0 |
| Based on Semantic Guidance of Fine-grained Alignment of Image-Text for Multi-modal Neural Machine Translation | | 0 |
| Translating Spanish into Spanish Sign Language: Combining Rules and Data-driven Approaches | | 0 |
| PICT@WAT 2022: Neural Machine Translation Systems for Indic Languages | | 0 |
| Multiple Pivot Languages and Strategic Decoder Initialization Helps Neural Machine Translation | | 0 |
| Exploring Word Alignment towards an Efficient Sentence Aligner for Filipino and Cebuano Languages | | 0 |
| Rakuten’s Participation in WAT 2022: Parallel Dataset Filtering by Leveraging Vocabulary Heterogeneity | | 0 |
| Applying Natural Annotation and Curriculum Learning to Named Entity Recognition for Under-Resourced Languages | Code | 0 |
| Byte-based Multilingual NMT for Endangered Languages | Code | 0 |
| Vocabulary-informed Language Encoding | | 0 |
| Language Branch Gated Multilingual Neural Machine Translation | | 0 |
| Two Languages Are Better than One: Bilingual Enhancement for Chinese Named Entity Recognition | | 0 |
| Penalizing Divergence: Multi-Parallel Translation for Low-Resource Languages of North America | | 0 |
| The Curious Case of Logistic Regression for Italian Languages and Dialects Identification | Code | 0 |
| Linguistically-Motivated Yorùbá-English Machine Translation | | 0 |
| Investigation of English to Hindi Multimodal Neural Machine Translation using Transliteration-based Phrase Pairs Augmentation | | 0 |
| NIT Rourkela Machine Translation (MT) System Submission to WAT 2022 for MultiIndicMT: An Indic Language Multilingual Shared Task | | 0 |
| English to Bengali Multimodal Neural Machine Translation using Transliteration-based Phrase Pairs Augmentation | | 0 |
| FRMT: A Benchmark for Few-Shot Region-Aware Machine Translation | Code | 0 |
| Benefiting from Language Similarity in the Multilingual MT Training: Case Study of Indonesian and Malaysian | | 0 |
| Dictionary Injection Based Pretraining Method for Tibetan-Chinese Machine Translation Model | | 0 |
| BehanceMT: A Machine Translation Corpus for Livestreaming Video Transcripts | | 0 |
| ParaZh-22M: A Large-Scale Chinese Parabank via Machine Translation | | 0 |
| Adversarial Training on Disentangling Meaning and Language Representations for Unsupervised Quality Estimation | | 0 |
| Multi-level Community-awareness Graph Neural Networks for Neural Machine Translation | Code | 0 |
| Towards Robust Neural Machine Translation with Iterative Scheduled Data-Switch Training | Code | 0 |
| Speeding up Transformer Decoding via an Attention Refinement Network | Code | 0 |
| Taking Actions Separately: A Bidirectionally-Adaptive Transfer Learning Method for Low-Resource Neural Machine Translation | | 0 |
| The Only Chance to Understand: Machine Translation of the Severely Endangered Low-resource Languages of Eurasia | | 0 |
| HFT: High Frequency Tokens for Low-Resource NMT | Code | 0 |
| TMU NMT System with Automatic Post-Editing by Multi-Source Levenshtein Transformer for the Restricted Translation Task of WAT 2022 | | 0 |
Page 48 of 216

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Transformer Cycle (Rev) | BLEU score | 35.14 | | Unverified |
| 2 | Noisy back-translation | BLEU score | 35 | | Unverified |
| 3 | Transformer+Rep(Uni) | BLEU score | 33.89 | | Unverified |
| 4 | T5-11B | BLEU score | 32.1 | | Unverified |
| 5 | BiBERT | BLEU score | 31.26 | | Unverified |
| 6 | Transformer + R-Drop | BLEU score | 30.91 | | Unverified |
| 7 | Bi-SimCut | BLEU score | 30.78 | | Unverified |
| 8 | BERT-fused NMT | BLEU score | 30.75 | | Unverified |
| 9 | Data Diversification - Transformer | BLEU score | 30.7 | | Unverified |
| 10 | SimCut | BLEU score | 30.56 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Transformer+BT (ADMIN init) | BLEU score | 46.4 | | Unverified |
| 2 | Noisy back-translation | BLEU score | 45.6 | | Unverified |
| 3 | mRASP+Fine-Tune | BLEU score | 44.3 | | Unverified |
| 4 | Transformer + R-Drop | BLEU score | 43.95 | | Unverified |
| 5 | Transformer (ADMIN init) | BLEU score | 43.8 | | Unverified |
| 6 | Admin | BLEU score | 43.8 | | Unverified |
| 7 | BERT-fused NMT | BLEU score | 43.78 | | Unverified |
| 8 | MUSE (Parallel Multi-scale Attention) | BLEU score | 43.5 | | Unverified |
| 9 | T5 | BLEU score | 43.4 | | Unverified |
| 10 | Local Joint Self-attention | BLEU score | 43.3 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | PiNMT | BLEU score | 40.43 | | Unverified |
| 2 | BiBERT | BLEU score | 38.61 | | Unverified |
| 3 | Bi-SimCut | BLEU score | 38.37 | | Unverified |
| 4 | Cutoff + Relaxed Attention + LM | BLEU score | 37.96 | | Unverified |
| 5 | DRDA | BLEU score | 37.95 | | Unverified |
| 6 | Transformer + R-Drop + Cutoff | BLEU score | 37.9 | | Unverified |
| 7 | SimCut | BLEU score | 37.81 | | Unverified |
| 8 | Cutoff+Knee | BLEU score | 37.78 | | Unverified |
| 9 | Cutoff | BLEU score | 37.6 | | Unverified |
| 10 | CipherDAug | BLEU score | 37.53 | | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | HWTSC-Teacher-Sim | Score | 19.97 | | Unverified |
| 2 | MS-COMET-22 | Score | 19.89 | | Unverified |
| 3 | MS-COMET-QE-22 | Score | 19.76 | | Unverified |
| 4 | KG-BERTScore | Score | 17.28 | | Unverified |
| 5 | metricx_xl_DA_2019 | Score | 17.17 | | Unverified |
| 6 | COMET-QE | Score | 16.8 | | Unverified |
| 7 | COMET-22 | Score | 16.31 | | Unverified |
| 8 | UniTE-src | Score | 15.68 | | Unverified |
| 9 | UniTE-ref | Score | 15.38 | | Unverified |
| 10 | metricx_xxl_DA_2019 | Score | 15.24 | | Unverified |