SOTAVerified

de-en

Papers

Showing 51–82 of 82 papers

Title | Status | Hype
Structured in Space, Randomized in Time: Leveraging Dropout in RNNs for Efficient Training | | 0
Embedding-Enhanced Giza++: Improving Alignment in Low- and High-Resource Scenarios Using Embedding Space Geometry | Code | 0
OmniNet: Omnidirectional Representations from Transformers | Code | 0
Predictive Attention Transformer: Improving Transformer with Attention Map Prediction | | 0
The University of Edinburgh-Uppsala University’s Submission to the WMT 2020 Chat Translation Task | | 0
Vocabulary Adaptation for Domain Adaptation in Neural Machine Translation | Code | 0
Pronoun-Targeted Fine-tuning for NMT with Hybrid Losses | Code | 0
On the Sub-Layer Functionalities of Transformer Decoder | | 0
Learn to Talk via Proactive Knowledge Transfer | | 0
On the Importance of Local Information in Transformer Based Models | | 0
Task-Level Curriculum Learning for Non-Autoregressive Neural Machine Translation | | 0
Improving Autoregressive NMT with Non-Autoregressive Model | | 0
Addressing Posterior Collapse with Mutual Information for Improved Variational Neural Machine Translation | | 0
Character-level Transformer-based Neural Machine Translation | | 0
Vocabulary Adaptation for Distant Domain Adaptation in Neural Machine Translation | | 0
AR: Auto-Repair the Synthetic Data for Neural Machine Translation | | 0
Normalization of Input-output Shared Embeddings in Text Generation Models | | 0
Self-Adaptive Scaling for Learnable Residual Structure | | 0
Monash University's Submissions to the WNGT 2019 Document Translation Task | | 0
Proactive Sequence Generator via Knowledge Acquisition | | 0
Hint-Based Training for Non-Autoregressive Machine Translation | Code | 0
Exploring Adequacy Errors in Neural Machine Translation with the Help of Cross-Language Aligned Word Embeddings | | 0
Johns Hopkins University Submission for WMT News Translation Task | | 0
The RWTH Aachen University Machine Translation Systems for WMT 2019 | | 0
Findings of the WMT 2019 Biomedical Translation Shared Task: Evaluation for MEDLINE Abstracts and Biomedical Terminologies | | 0
Hint-based Training for Non-Autoregressive Translation | | 0
STACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency using Prefix-to-Prefix Framework | Code | 0
The Samsung and University of Edinburgh’s submission to IWSLT17 | | 0
LIG-CRIStAL System for the WMT17 Automatic Post-Editing Task | | 0
The University of Edinburgh’s systems submission to the MT task at IWSLT | | 0
Fully Character-Level Neural Machine Translation without Explicit Segmentation | Code | 0
On Using Monolingual Corpora in Neural Machine Translation | | 0
Page 2 of 2

No leaderboard results yet.