| Title | Date | Tasks / Datasets | Code | Stars |
|---|---|---|---|---|
| Structured in Space, Randomized in Time: Leveraging Dropout in RNNs for Efficient Training | Jun 22, 2021 | de-en, Language Modelling | Unverified | 0 |
| Embedding-Enhanced Giza++: Improving Alignment in Low- and High-Resource Scenarios Using Embedding Space Geometry | Apr 18, 2021 | de-en, Machine Translation | Code Available | 0 |
| OmniNet: Omnidirectional Representations from Transformers | Mar 1, 2021 | de-en, Few-Shot Learning | Code Available | 0 |
| Predictive Attention Transformer: Improving Transformer with Attention Map Prediction | Jan 1, 2021 | de-en, Machine Translation | Unverified | 0 |
| The University of Edinburgh-Uppsala University’s Submission to the WMT 2020 Chat Translation Task | Nov 1, 2020 | de-en, Machine Translation | Unverified | 0 |
| Vocabulary Adaptation for Domain Adaptation in Neural Machine Translation | Nov 1, 2020 | de-en, Domain Adaptation | Code Available | 0 |
| Pronoun-Targeted Fine-tuning for NMT with Hybrid Losses | Oct 15, 2020 | de-en, fr-en | Code Available | 0 |
| On the Sub-Layer Functionalities of Transformer Decoder | Oct 6, 2020 | Decoder, de-en | Unverified | 0 |
| Learn to Talk via Proactive Knowledge Transfer | Aug 23, 2020 | de-en, Knowledge Distillation | Unverified | 0 |
| On the Importance of Local Information in Transformer Based Models | Aug 13, 2020 | de-en, MRPC | Unverified | 0 |