SOTAVerified

de-en (German–English)

Papers

Showing 51–60 of 82 papers

| Title | Status | Hype |
|---|---|---|
| Structured in Space, Randomized in Time: Leveraging Dropout in RNNs for Efficient Training | | 0 |
| Embedding-Enhanced Giza++: Improving Alignment in Low- and High-Resource Scenarios Using Embedding Space Geometry | Code | 0 |
| OmniNet: Omnidirectional Representations from Transformers | Code | 0 |
| Predictive Attention Transformer: Improving Transformer with Attention Map Prediction | | 0 |
| The University of Edinburgh–Uppsala University's Submission to the WMT 2020 Chat Translation Task | | 0 |
| Vocabulary Adaptation for Domain Adaptation in Neural Machine Translation | Code | 0 |
| Pronoun-Targeted Fine-tuning for NMT with Hybrid Losses | Code | 0 |
| On the Sub-Layer Functionalities of Transformer Decoder | | 0 |
| Learn to Talk via Proactive Knowledge Transfer | | 0 |
| On the Importance of Local Information in Transformer Based Models | | 0 |
Page 6 of 9

No leaderboard results yet.